Article

Cognition in Context: Phenomenology, Situated Robotics and the Frame Problem


Abstract

The frame problem is the difficulty of explaining how non-magical systems think and act in ways that are adaptively sensitive to context-dependent relevance. Influenced centrally by Heideggerian phenomenology, Hubert Dreyfus has argued that the frame problem is, in part, a consequence of the assumption (made by mainstream cognitive science and artificial intelligence) that intelligent behaviour is representation-guided behaviour. Dreyfus’ Heideggerian analysis suggests that the frame problem dissolves if we reject representationalism about intelligence and recognize that human agents realize the property of thrownness (the property of being always already embedded in a context). I argue that this positive proposal is incomplete until we understand exactly how the properties in question may be instantiated in machines like us. So, working within a broadly Heideggerian conceptual framework, I pursue the character of a representation-shunning thrown machine. As part of this analysis, I suggest that the frame problem is, in truth, a two-headed beast. The intra-context frame problem challenges us to say how a purely mechanistic system may achieve appropriate, flexible and fluid action within a context. The inter-context frame problem challenges us to say how a purely mechanistic system may achieve appropriate, flexible and fluid action in worlds in which adaptation to new contexts is open-ended and in which the number of potential contexts is indeterminate. Drawing on the field of situated robotics, I suggest that the intra-context frame problem may be neutralized by systems of special-purpose adaptive couplings, while the inter-context frame problem may be neutralized by systems that exhibit the phenomenon of continuous reciprocal causation. I also defend the view that while continuous reciprocal causation is in conflict with representational explanation, special-purpose adaptive coupling, as well as its associated agential phenomenology, may feature representations.
My proposal has been criticized recently by Dreyfus, who accuses me of propagating a cognitivist misreading of Heidegger, one that, because it maintains a role for representation, leads me seriously astray in my handling of the frame problem. I close by responding to Dreyfus’ concerns.
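The situated-robotics strategy gestured at in the abstract can be made concrete with a toy example. The sketch below is my own illustration, not the author's: all names and parameters are invented. It implements a Braitenberg-style vehicle whose light sensors are wired directly to its motors, a minimal special-purpose adaptive coupling that produces fluid, context-sensitive approach behaviour without any stored model of where the light is.

```python
import math

def sense(x, y, heading, side, light):
    """Light intensity at one of two sensors offset from the body axis.
    side = +1 for the left sensor, -1 for the right."""
    sx = x + 0.5 * math.cos(heading + side * 0.5)
    sy = y + 0.5 * math.sin(heading + side * 0.5)
    d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, light, dt=0.1):
    """Special-purpose coupling: each sensor drives the *contralateral*
    wheel, so the vehicle turns toward the light. No world model, no
    stored representation of 'where the light is' -- just a fixed
    sensorimotor wiring that is context-sensitive at the point of use."""
    left = sense(x, y, heading, +1, light)
    right = sense(x, y, heading, -1, light)
    v_left, v_right = right, left          # crossed excitatory wiring
    speed = (v_left + v_right) / 2.0
    turn = (v_right - v_left) / 0.3        # wheel-base of 0.3 units
    heading += turn * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

light = (5.0, 5.0)
x, y, heading = 0.0, 0.0, 0.0
d0 = math.hypot(x - light[0], y - light[1])
for _ in range(2000):
    x, y, heading = step(x, y, heading, light)
d1 = math.hypot(x - light[0], y - light[1])
print(d1 < d0)  # the vehicle closes in on the light
```

Because context enters only through the sensors at the moment of action, nothing in the mechanism answers to a detachable inner description of the environment; this is the sense in which such couplings may be said to neutralize the intra-context frame problem.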




... According to Maurice Merleau-Ponty (1945/2012), pre-reflective intelligence does not necessarily prevent reflection but certainly constrains and demarcates it, as these two categories stand in a background-foreground relationship: despite their reciprocity, the pre-reflective is more primordial than the reflective, because it operates before and below contentful cognition, as its transcendental precondition (Jackson, 2018; Moya, 2014; Reynolds, 2006). As such, the pre-reflective defines both the ontological boundary and the functional limit of the reflective (Dreyfus, 1996, 2000, 2008; Moe, 2007; McManus, 2008; Wheeler, 2008). Dreyfus uses, without apparent distinction, two different characterisations of pre-reflective intelligence, in a way that is potentially ambiguous: first, as non-conscious, non-cognitive, non-agential, or even "mindless" (Eriksen, 2010; Zahavi, 2012); and, second, as non-representational, i.e. guided solely by the contentless solicitations produced by online experience. ...
... Experts draw from this horizon their hermeneutical capacity to discriminate between very similar contexts and confidently navigate the familiar ones. Dreyfus, appealing to the Heideggerian notion of "Being-in-the-world" (Heidegger, 1927; Taylor, 1993; Wheeler, 2005, 2008), calls this horizon the "background", to indicate the incalculable number of formal and material preconditions behind every particular variety of skilful coping (Dreyfus, 2008, 2012; Fusche Moe, 2004). The background concretely shapes, and fills with operational significance, the pre-reflective know-how that informs the expert's decisions during their performance. ...
... Conversely, the expert is exactly the sort of agent who can count on the holistic (systemic, non-analysable into discrete parts) and situated (sensitive to an unquantifiable number of massively interconnected experiential particulars) pre-comprehension (non-conceptual competent familiarity) of their own actions' practical background (the inexhaustible horizon of preconditions that implicitly defines the causes and reasons of their actions; see Dreyfus, 2012; Cappuccio & Wheeler, 2012). Control is skilful only if it is sufficiently fine-grained and flexible to incorporate this contextual pre-comprehension (Wheeler, 2008; Cappuccio & Wheeler, 2010a, 2010b). Only contentless, embodied habitual dispositions, not contentful knowledge, are able to sustain a practically relevant pre-comprehension: while representation of facts is self-contained, value-neutral, merely receptive, and contextually detached, well-trained habitual dispositions allow perception to be active, intelligent, value-laden, predictive, and practically tuned to numerous contexts, so as to enable the fast, precise adaptivity required to perform skilfully (Beilock & Gray, 2012; Craighero et al., 1999; Gray, 2014; Rizzolatti & Craighero, 2010; Witt et al., 2007). ...
Article
Full-text available
Skilful expertise is grounded in practical, performative knowledge-how, not in detached, spectatorial knowledge-that, and knowledge-how is embodied in habitual dispositions, not in representations of facts and rules. Consequently, as action control is a key requirement for the intelligent selection, initiation, and regulation of skilful performance, habitual action control, i.e. the kind of action control based on habitual dispositions, is the true hallmark of skill and the only veridical criterion for evaluating expertise. Not only does this imply that knowledge-that does not make your actions more skilful; it also implies that it makes them less skilful. This thesis, which I call Radical Habitualism, finds a precursor in Hubert Dreyfus. His approach is considered extreme by most philosophers of skill and expertise: an agent, says Dreyfus, does not perform like an expert when they lack the embodied dispositions necessary to control their action habitually, or when they stop relying on such dispositions to control their actions. Thus, one cannot perform skilfully if one's actions are guided by representations (isomorphic schemas, explicit rules, and contentful instructions), as the knowledge-that they convey disrupts or diminishes the agent's habitual engagement with the task at hand. In defence of Radical Habitualism, I will argue that only the contentless know-how embedded in habitual dispositions fulfils (i) the genetic, (ii) the normative, and (iii) the epistemic requirements of skilful performance. I will examine the phenomenological premises supporting Dreyfus' approach, clarify their significance for a satisfactory normative and explanatory account of skilful expertise, and rebut the most common objections raised by both intellectualists and conciliatory habitualists concerning hybrid actions guided by a mix of habitual and representational forms of control.
In revisiting Dreyfus' anti-representationalist approach, I will particularly focus on its epistemological implications, de-emphasizing other considerations related to conscious awareness.
... In prior work in HCI, ready-to-hand use has been associated with engaged, natural and "fluid" technology use [2,28,68,81,92], with the sense that the tool is "part of us" [9], and with locus of attention [2,3,81]. 'Present-at-hand' and 'unready-to-hand' modes of engagement have been associated with "breakdowns" in fluid interaction and with lack of skill or familiarity [10,81,95], but also with useful and valuable behaviours such as reflection and analysis [15,34,66,96], problem solving [34,96,99], and conscious awareness of the tool's properties [2,3,29,81,96]. ...
Conference Paper
Full-text available
The philosophical construct readiness-to-hand describes focused, intuitive tool use, and has been linked to tool-embodiment and immersion. The construct has been influential in HCI and design for decades, but researchers currently lack appropriate measures and tools to investigate it empirically. To support such empirical work we investigate the possibility of operationalising readiness-to-hand in measurements of multifractality in movement, building on recent work in cognitive science. We conduct two experiments (N=44, N=30) investigating multifractality in mouse movements during a computer game, replicating prior results and contributing new findings. Our results show that multifractality correlates with dimensions associated with readiness-to-hand, including skill and task-engagement, during tool breakdown, task learning and normal play. We describe future possibilities for the application of these methods in HCI, supporting such work by sharing scripts and data (https://osf.io/2hm9u/), and introducing a new data-driven approach to parameter selection.
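For readers unfamiliar with the method family behind these measurements, the following sketch shows a minimal multifractal detrended fluctuation analysis (MFDFA). It is an illustrative implementation under my own simplifications (linear detrending, non-overlapping windows, synthetic white noise rather than mouse-movement data), not the authors' analysis pipeline; their scripts are at the OSF link above.

```python
import numpy as np

def mfdfa(x, scales, qs):
    """Minimal multifractal detrended fluctuation analysis (MFDFA).
    Returns the generalized Hurst exponent h(q) for each moment q."""
    y = np.cumsum(x - np.mean(x))          # the 'profile' of the series
    hq = []
    for q in qs:
        log_F = []
        for s in scales:
            n = len(y) // s
            f2 = []
            for v in range(n):             # local linear detrending per window
                seg = y[v * s:(v + 1) * s]
                t = np.arange(s)
                coef = np.polyfit(t, seg, 1)
                f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            f2 = np.array(f2)
            if q == 0:                     # limit case: log-average
                F = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                F = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
            log_F.append(np.log(F))
        # h(q) is the slope of log F_q(s) against log s
        hq.append(np.polyfit(np.log(scales), log_F, 1)[0])
    return np.array(hq)

rng = np.random.default_rng(0)
noise = rng.standard_normal(8192)          # white noise: h(q) ~ 0.5 for all q
scales = [16, 32, 64, 128, 256]
qs = [-3, -1, 0, 1, 3]
h = mfdfa(noise, scales, qs)
width = h.max() - h.min()                  # spectrum width ~ 0 (monofractal)
print(h.round(2), round(width, 2))
```

For white noise every generalized Hurst exponent h(q) sits near 0.5, so the spread of h(q) across moments is small; multifractal signals, by contrast, show a wide spread, and it is this width that such studies relate to dimensions of readiness-to-hand.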
... In prior work in HCI, ready-to-hand use has been associated with engaged, natural and "fluid" technology use [2,28,68,81,92], with the sense that the tool is "part of us" [9], and with locus of attention [2,3,81]. 'Present-at-hand' and 'unready-to-hand' modes of engagement have been associated with "breakdowns" in fluid interaction and with lack of skill or familiarity [10,81,95], but also with useful and valuable behaviours such as reflection and analysis [15,34,66,96], problem solving [34,96,99], and conscious awareness of the tool's properties [2,3,29,81,96]. ...
Preprint
Full-text available
The philosophical construct readiness-to-hand describes focused, intuitive tool use, and has been linked to tool-embodiment and immersion. The construct has been influential in HCI and design for decades, but researchers currently lack appropriate measures and tools to investigate it empirically. To support such empirical work we investigate the possibility of operationalising readiness-to-hand in measurements of multifractality in movement, building on recent work in cognitive science. We conduct two experiments (N=44, N=30) investigating multifractality in mouse movements during a computer game, replicating prior results and contributing new findings. Our results show that multifractality correlates with dimensions associated with readiness-to-hand, including skill and task-engagement, during tool breakdown, task learning and normal play. We describe future possibilities for the application of these methods in HCI, supporting such work by sharing scripts and data (https://osf.io/2hm9u/), and introducing a new data-driven approach to parameter selection.
... I argue that embodied understanding and conceptual-representational understanding interact through schematic structure. I demonstrate that common conceptions of these two kinds of understanding, such as those developed by Wheeler (2005, 2008) and Dreyfus (2007a, b, 2013), entail a separation between them that gives rise to significant problems. Notably, it becomes unclear how they could interact; a problem that has been pointed out by Dreyfus (2007a, b, 2013) and McDowell (2007) in particular. ...
... Let us first focus on the phenomenologically inspired work of Michael Wheeler (2005, 2008) and Hubert Dreyfus (2007a, 2007b, 2013). Both differentiate between different modes of engagement based on phenomenological analyses that are inspired by Martin Heidegger's (1962) and Maurice Merleau-Ponty's (2012) work. ...
... This background is conceived of as a holistic and contextual background structure that allows us to act and interact with our living world. This pre-conceptual background is then contrasted with explicit concept use (Dreyfus 2007a, 2007b; Wheeler 2008; Hutto 2012; Dreyfus and Taylor 2015). ...
Article
I argue that embodied understanding and conceptual-representational understanding interact through schematic structure. I demonstrate that common conceptions of these two kinds of understanding, such as those developed by Wheeler (2005, 2008) and Dreyfus (2007a, b, 2013), entail a separation between them that gives rise to significant problems. Notably, it becomes unclear how they could interact; a problem that has been pointed out by Dreyfus (2007a, b, 2013) and McDowell (2007) in particular. I propose a Kantian strategy to close the gap between them. I argue that embodied and conceptual-representational understanding are governed by schemata. Since they are governed by schemata, they can interact through a structure that they have in common. Finally, I spell out two different ways to conceive of the schematic interaction between them: a close, grounding relationship, and a looser relationship that allows for a minimal interaction but preserves the autonomy of both forms of understanding.
... (For some of the details of this transformation, see for instance Kiverstein, forthcoming; Wheeler and di Paolo, forthcoming.) First advanced by Dreyfus (for example, 1992, 2002a, 2002b), and then in various forms by, for example, Kelly (2000, 2002), Rietveld (2008, forthcoming) and Wheeler (2005, 2008, 2010; Cappuccio and Wheeler, 2010), the approach in question draws its inspiration from phenomenological thinkers such as Heidegger and Merleau-Ponty. In its most prominent form (to be placed under scrutiny here), the view takes everyday intelligent activity to be most revealingly characterized by a mode of engagement with environmental entities that Dreyfus (2002a) has dubbed 'absorbed coping', understood as the skilful and fluid adjustment of behaviour to context-dependent contingencies by way of a richly adaptive, direct (that is, unmediated by representations or any subject-object interface), situated sensitiveness to what is relevant. ...
... First there is an intra-context problem, which challenges us to say how a naturalistically discharged system is able to achieve appropriate, flexible, and fluid action within a context. Then there is an inter-context problem, which challenges us to say how a naturalistically discharged system is able to flexibly and fluidly switch between an open-ended sequence of contexts in a relevance-sensitive manner (Wheeler 2008, 2010). If this distinction between an intra-context and an inter-context problem of relevance is indeed genuine (criticisms of the distinction will be considered later), an intriguing question suggests itself: are the non-representational processes that we have met so far under the banner of Dreyfusian ground-level intelligence sufficient to account not only for our within-context sensitivity to relevance, but also for our capacity for relevance-sensitive, open-ended context-switching? ...
... Heidegger's analysis suggests further that the kind of practical problem-solving distinctive of un-readiness-to-hand involves representational states (Wheeler, 2005, 2008, 2010). Crucially, however, these are not the full-blooded cognitivist representations that plausibly mediate epistemic access to the present-at-hand. ...
Chapter
Full-text available
Studies of embodied intelligence have often tended to focus on the essentially responsive aspects of bodily expertise (for example, catching a ball once it has been hit into the air). But skilled sportsmen and sportswomen, actors and actresses, dancers, orators, and other performers often execute ritual-like gestures or other fixed action routines as performance-optimizing elements in their pre-performance preparations, especially when daunting or unfamiliar conditions are anticipated. For example, a recent movie (The King’s Speech) and a book of memories (Logue and Conradi, 2010) have revealed that, just before broadcasting his historic announcement that the United Kingdom was entering the Second World War, King George VI furiously repeated certain tongue twisters in a resolute effort to overcome his relentless stutter. Such ritualized actions don’t merely change the causal relations between performers and their physical environments (although this may well be part of their function), but they provide performers with the practical scaffolds that summon more favourable contexts for their accomplishments, by uncovering viable landscapes for effective action rather than unassailable barricades of frightening obstacles. In other words, while the kinds of embodied skills that have occupied many recent theorists serve to attune behaviour to an actual context of activity, whether that context is favourable or not, preparatory embodied routines actively refer to certain potential (and thus non-actual) contexts of a favourable nature that those routines themselves help to bring about, indicating the possibilities of actions disclosed by the desired context.
... The emerging idea is that existentialist phenomenology might have a positive role to play in revealing phenomena and processes that cognitive science might profitably explore; indeed, that existentialist phenomenology might even become a member of the cognitive-scientific community and benefit from a collaborative engagement with the latter. This idea has also been explored by Wheeler (2005, 2008, 2010b) in his development and defence of (what he identifies explicitly as) a Heideggerian embodied cognitive science. ...
... Rather, context is something that is always automatically present in that mechanism at the point of triggering. Wheeler (2008) interprets this as a kind of intrinsic context sensitivity that solves (or rather dissolves) the intra-context problem of relevance. So now what about the inter-context problem of relevance? ...
Chapter
This fully revised and updated 2nd edition provides a comprehensive reference guide to existentialism, featuring chapters on key existentialist thinkers, as well as chapters applying existentialism to subject areas ranging across politics, literature, feminism, religion, the emotions, cognitive science, and poststructuralism. Contemporary developments in the field of existentialism that speak to issues of identity and exclusion are explored in 4 new chapters on race, gender, disability, and technology, whilst a 5th new chapter outlines analytic philosophy's complicated relationship to existentialism. Presenting the field of existentialism beyond the European tradition, this edition also includes a new key thinker chapter on Frantz Fanon, alongside Kierkegaard, Nietzsche, Heidegger, Sartre and de Beauvoir, as well as new engagement with the work of scholars on race and existentialism, including Lewis R. Gordon, George Yancy, and Richard Wright. The resources section at the end of the book includes an updated A to Z glossary and timeline of key events, texts and thinkers in existentialism, as well as a list of relevant organisations and an annotated guide to further reading, making this 2nd edition an invaluable text for scholars and students alike.
... What the difficulty of the regress ultimately asks is how a cognitive system "knows", after only a partial search, what is relevant, and moreover "knows" that the information gathered is already sufficient for it to carry out a given task (Wheeler, 2008). To grasp this more easily, imagine the following situation. ...
Article
Full-text available
Information overload is an epistemological problem that concerns several areas of knowledge: the field of the cognitive sciences, in its attempt to model systems capable of detecting relevance, as well as the now-forgotten encyclopedism practised during the sixteenth to eighteenth centuries and devoted to the organization, arrangement and dissemination of the scientific information of the period. This article seeks to account for the failure of these two fields of research to find solutions to the problem of informational overload. To address our key question, we examine in particular the relatively pessimistic views of H. Dreyfus and Novalis. These authors' treatments show that information overload cannot, and will not, be resolved without recourse to a line of investigation beyond those that have proved insufficient, namely the computational model of the mind (Dreyfus) and the classificatory operation of the encyclopedia (Novalis).
... Yet what is relevant can itself not be determined context-independently but only relative to the context in which the task is specified. The problem is that the context itself cannot be independently specified, and so a regress arises (Dreyfus, 1992; Wheeler, 2008). Inferring (hidden) beliefs from observable behavior is an abductive inference to the best explanation (Apperly, 2011, p. 118 f.); as such, it is a "best guess", all things considered, that always leaves open the possibility of rival explanations. ...
Article
Full-text available
I put forward an externalist theory of social understanding. On this view, psychological sense making takes place in environments that contain both agent and interpreter. The spatial structure of such environments is social, in the sense that its occupants locate its objects by an exercise in triangulation relative to each of their standpoints. This triangulation is achieved in intersubjective interaction and gives rise to a triadic model of the social mind. This model can then be used to make sense of others’ observed actions. Its possession plays a vital role in the development of the capacity for false belief reasoning. The view offers an integrated account of the development of social cognition from primary intersubjectivity to level-2 perspective taking. It incorporates insights from interactionism and mindreading theories of social cognition and thus offers a way out of the stalemate between defenders of the two views. Because psychological sense making is perspectival, the frame problem does not arise for social reasoners: the perspective they bring to bear on the action that is to be interpreted constrains the information they can select to make sense of what others do.
... sufficient to make a given decision (Wheeler, 2008). This question captures the epistemological aspect of our problem of interest: the first issue concerns the way in which we select from vast amounts of information, while the second questions the requirement of adequacy. ...
Preprint
Full-text available
The IIESS Working Papers report progress on research carried out at the Institute. The authors are responsible for the opinions expressed in these documents.
... In an informative example, Wittgenstein (1978) describes this integrated engaged responsiveness and lived affective experience: a door is appreciated as too low in its current context. Switching has been understood by Wheeler (2008) as intra-context sensitivity to relevance, which he explained dynamically; he distinguishes it from inter-context sensitivity to relevance. ...
Chapter
Full-text available
The Oxford Handbook of 4E Cognition provides a systematic overview of the state of the art in the field of 4E cognition: it includes chapters on hotly debated topics, for example, on the nature of cognition and the relation between cognition, perception and action; it discusses recent trends such as Bayesian inference and predictive coding; it presents new insights and findings regarding social understanding including the development of false belief understanding, and introduces new theoretical paradigms for understanding emotions and conceptualizing the interaction between cognition, language and culture. Each thematic section ends with a critical note to foster the fruitful discussion. In addition the final section of the book is dedicated to applications of 4E cognition approaches in disciplines such as psychiatry and robotics. This is a book with high relevance for philosophers, psychologists, psychiatrists, neuroscientists and anyone with an interest in the study of cognition as well as a wider audience with an interest in 4E cognition approaches.
... The FP in its broader formulation appears intractable; we have previously argued that it is in fact intractable on the basis of time complexity [7]. However, general discussion of the FP has remained largely qualitative (e.g., References [8-14]; see References [15,16] for recent reviews) and the question of its technical tractability in a realistic, uncircumscribed operational setting remains open. Practical solutions to the FP, particularly in robotics, remain heuristic (e.g., References [17-19]). ...
Article
Full-text available
The open-domain Frame Problem is the problem of determining what features of an open task environment need to be updated following an action. Here we prove that the open-domain Frame Problem is equivalent to the Halting Problem and is therefore undecidable. We discuss two other open-domain problems closely related to the Frame Problem, the system identification problem and the symbol-grounding problem, and show that they are similarly undecidable. We then reformulate the Frame Problem as a quantum decision problem, and show that it is undecidable by any finite quantum computer.
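The equivalence proof itself is in the paper; what can be sketched briefly is the classical diagonalization that makes the Halting Problem undecidable, on which any such reduction ultimately rests. In this illustrative fragment (names are my own; an exception stands in for divergence so the demonstration terminates) any candidate decider is defeated by a program built to do the opposite of whatever the decider predicts about it.

```python
def naive_halts(prog, arg):
    """A candidate halting decider. Any total decider -- however it is
    implemented -- is defeated by the diagonal program below. This one
    simply answers True for everything."""
    return True

class Diverges(Exception):
    """Stands in for an infinite loop, so the demo itself terminates."""

def diagonal(arg):
    """Do the opposite of what the decider predicts about this program."""
    if naive_halts(diagonal, arg):
        raise Diverges            # predicted to halt -> diverge
    return None                   # predicted to diverge -> halt

prediction = naive_halts(diagonal, diagonal)   # True: "it halts"
try:
    diagonal(diagonal)
    actually_halts = True
except Diverges:
    actually_halts = False

print(prediction, actually_halts)  # True False: the decider is wrong
```

Replacing `naive_halts` with any other total decider leaves the contradiction intact, since `diagonal` consults the decider itself; no bounded amount of cleverness closes the gap.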
... Their culture-specific knowledge and experience and their prior knowledge of the target group affect the way they translate. Thus, connectionists tackle the wider epistemological and hermeneutic problem of "framing" (Gadamer, 1972; Wheeler, 2008): they try to explain our ability to take account of a situation, base our interpretations on what is relevant in a specific situation, and limit the scope of information that is (re)considered in the process. In translation studies, Gutt (1991) discusses the problem of endless circles of inferences in terms of relevance, based on Sperber and Wilson's (1986) relevance theory. ...
Article
Cognitive scientific approaches to translation investigate the development and workings of the underlying processes that make the complex cognitive behavior of translation possible. They refer to and expand existing cognitive scientific models of the mind to explain the behavior and choices of translators. Cognitive models have been oriented to the metaphors of the computer (translation as information processing, code switching, and symbol manipulation), neural networks (connectionist models of translation), and sociocognitive theory (translation as situated interaction). Cognitive translation research uses a variety of methods to investigate the activity of translating by different groups of participants with a range of text types and in a variety of contexts. The methods used to date have included, for example, introspection, neurological EEG measurements, theoretical analysis, interviews, think‐aloud protocols, participant observation, screen logging, and eye‐tracking. Recent cognitive approaches to translation emphasize the social organization of translation and the ergonomics of the computer‐supported cooperative work in which complex activities are negotiated. Defining professionalism and expertise has also become one of the current themes in cognitive translation studies. In addition, affective and emotional aspects shift into the focus of attention as essential elements of cognition, and their impact on translation performance is being increasingly investigated.
... sufficient to make a given decision (Wheeler, 2008). This question captures the epistemological aspect of our problem of interest: the first issue concerns the way in which we select from vast amounts of information, while the second questions the requirement of adequacy. ...
Book
The IIESS Working Papers report progress on research carried out at the Institute. The authors are responsible for the opinions expressed in these documents.
... The speed with which we receive that information, and its constant flux, aggravate these circumstances even further, since the overwhelming quantity of information complicates some general matters, such as decision-making, and some more particular ones, such as those involved in determining relevance. Indeed, one of the most important current problems in cognitive research asks how we human beings determine relevance and judge when the information gathered is sufficient to make a given decision (Wheeler, 2008). This question captures the epistemological aspect of our problem of interest: the first issue concerns the way in which we select from vast amounts of information, while the second questions the requirement of adequacy. ...
Article
Full-text available
The authors of this contribution carry out a qualitative analysis in order to characterize the ways in which the dynamics of territorial assistance with food and with personal-hygiene and general cleaning products developed, in a context of mandatory preventive social isolation, in three vulnerable neighbourhoods of Bahía Blanca. The article forms part of the IIESS Working Paper: LA INVESTIGACIÓN EN CIENCIAS SOCIALES EN TIEMPOS DE LA PANDEMIA POR COVID-19 (Social Science Research in Times of the COVID-19 Pandemic).
... The speed with which we receive that information, and its constant flux, aggravate these circumstances even further, since the overwhelming quantity of information complicates some general matters, such as decision-making, and some more particular ones, such as those involved in determining relevance. Indeed, one of the most important current problems in cognitive research asks how we human beings determine relevance and judge when the information gathered is sufficient to make a given decision (Wheeler, 2008). This question captures the epistemological aspect of our problem of interest: the first issue concerns the way in which we select from vast amounts of information, while the second questions the requirement of adequacy. ...
Book
Full-text available
The IIESS Working Papers reflect the progress of research carried out at the Institute. The authors are responsible for the opinions expressed in the documents.
Book
Full-text available
At the end of 2019 the world began to face a new and difficult health situation: the COVID-19 pandemic. While the health systems of the affected countries confronted deeply critical situations, extreme measures (quarantines, suspension of activities, among others) were adopted, and thousands of scientists around the world began to devote their efforts to finding answers and solutions. At present, the eyes of the world are fixed on a potential vaccine or treatment for the disease. Protocols and trials proceed against the clock. Meanwhile, other laboratories seek solutions to pressing practical problems: analyses of insulation materials, ventilators, sanitizers, transport techniques, logistics, and so on concentrate a frenetic academic and technical activity. In this context, scientific work in the social sciences contributes vigorously to weathering this juncture. Moreover, it becomes a field of research of vital importance when considering the social problems, cultural as well as political and economic, that have arisen and will arise as consequences of preventive isolation, the redefinition of national health systems, the economic crisis, and institutional upheavals, among others. At the Instituto de Investigaciones Económicas y Sociales del Sur (IIESS UNS-CONICET, Bahía Blanca, Argentina), a broad group of researchers is devoted to analysing these topics within the framework of various research projects. At the request of the IIESS leadership, several of these researchers set out their progress, from a pluralist and interdisciplinary perspective, in the working papers and reflections compiled in the present volume. Our hope is that this collective document will be the first in a series whose aim is to contribute to knowledge in the context of COVID-19.
The book opens with a reflection by Dr. Ana María Franchi, President of CONICET, on this undertaking. Then, by special invitation, Dr. Noemí Girbal-Blacha presents an essay on institutions and the pandemic, in which she stresses the need to preserve an active democratic institutional framework that provides clear rules of the game for society.
... It is unclear how that boundary could be learned if the animal does not first start by considering all possible outcomes. Learning to delimit what is relevant from what is irrelevant is a nontrivial problem and bears resemblance to the philosophical frame problem: In a sufficiently rich environment, there is no tractably identifiable boundary between (1) knowledge that is relevant to a particular context, and thus needs to be updated through learning, and (2) knowledge that is irrelevant to a particular context, and thus can be left alone (Dennett, 2006; Moore, 1981; Pylyshyn, 1987; Wheeler, 2008). ...
Article
Full-text available
A vector-based model of discriminative learning is presented. It is demonstrated to learn association strengths identical to the Rescorla–Wagner model under certain parameter settings (Rescorla & Wagner, 1972, Classical Conditioning II: Current Research and Theory, 2, 64–99). For other parameter settings, it approximates the association strengths learned by the Rescorla–Wagner model. I argue that the Rescorla–Wagner model has conceptual details that exclude it as an algorithmically plausible model of learning. The vector learning model, however, does not suffer from the same conceptual issues. Finally, we demonstrate that the vector learning model provides insight into how animals might learn the semantics of stimuli rather than just their associations. Results for simulations of language processing experiments are reported.
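For reference, the classic Rescorla–Wagner update rule that this abstract benchmarks against can be sketched in a few lines (a minimal illustration of the 1972 rule only, not the vector model the paper itself proposes; the function name and parameter defaults here are conventional placeholders):

```python
def rescorla_wagner(trials, alpha=0.1, beta=1.0, lam=1.0):
    """Rescorla-Wagner learning: each trial lists the cues present and
    whether the outcome (US) occurred. Every present cue's association
    strength moves toward lambda (US present) or 0 (US absent) by a
    fraction alpha*beta of the shared prediction error."""
    V = {}
    for cues, us_present in trials:
        total = sum(V.get(c, 0.0) for c in cues)       # summed prediction
        error = (lam if us_present else 0.0) - total   # shared prediction error
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * beta * error
    return V

# Simple acquisition: "light" paired with the outcome on every trial,
# so its strength climbs toward lambda = 1.
V = rescorla_wagner([(["light"], True)] * 50)

# Blocking: once "light" alone predicts the outcome, adding "tone" in
# compound trials leaves "tone" with little strength (Kamin blocking),
# because the shared error term is already near zero.
V2 = rescorla_wagner([(["light"], True)] * 50 +
                     [(["light", "tone"], True)] * 50)
```

The shared error term is what makes the model competitive rather than cue-independent, and it is the feature a vector-based reformulation must reproduce to match its association strengths.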
... Naturally, advancements have been made that have extended the range of robotic abilities to adapt and reflexively respond to environmental changes. This marks a shift towards situated robotics, which focus on behaving appropriately in the current situation in which the robots find themselves rather than on modeling some end goal or idealized external environment [33,34]. Still, the general practice is to design these AI-facilitated programs as software first and to evolve them in simulations before they are built and tested in their robotic form [30,35]. ...
Article
Full-text available
Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits makes for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and similarly, machine consciousness. N. Katherine Hayles’s novel conception of nonconscious cognition in her analysis of the human cognition-consciousness connection is discussed in relation to how nonconscious cognition can be envisioned and exacerbated in embodied AI. Similarly, this paper offers a way of understanding the concept of suffering in a way that is different than the conventional sense of attributing it to either a purely physical state or a conscious state, instead of grounding at least a type of suffering in this form of cognition.
... Over the years this overarching problem of meaning has been famously discussed in terms of a variety of more specific practical and theoretical problems, including the symbol grounding problem [11], the Chinese room argument [12], and generalizations of the frame problem [13,14]. A decade ago one of us co-authored an article [6] that diagnosed the root cause of this problem of meaning in AI as a lack of precarious self-individuation of artificial agents (i.e., as a lack of life, see also [15][16][17]). ...
Article
Full-text available
In this essay we critically evaluate the progress that has been made in solving the problem of meaning in artificial intelligence (AI) and robotics. We remain skeptical about solutions based on deep neural networks and cognitive robotics, which in our opinion do not fundamentally address the problem. We agree with the enactive approach to cognitive science that things appear as intrinsically meaningful for living beings because of their precarious existence as adaptive autopoietic individuals. But this approach inherits the problem of failing to account for how meaning as such could make a difference for an agent’s behavior. In a nutshell, if life and mind are identified with physically deterministic phenomena, then there is no conceptual room for meaning to play a role in its own right. We argue that this impotence of meaning can be addressed by revising the concept of nature such that the macroscopic scale of the living can be characterized by physical indeterminacy. We consider the implications of this revision of the mind-body relationship for synthetic approaches.
... However, as Clark (2016, p.259) argues, because of the goal of reaching feasible applications, "Implausible implications of pervasive brute optimality are thus abandoned in favour of strategies that deliver some combination of efficacy, reliability, and energetic efficiency". Wheeler (2008) mentions that there is a difference between sensitivity to the current context and crossing knowledge from one context to the other. As Clark (2016, p.141) notes, "within the predictive processing paradigm, such context-sensitivity becomes '[…] pervasive and "maximal"'". ...
Thesis
Full-text available
Dual Process Theory has increasingly gained fame as a framework for explaining evidence in reasoning and decision making tasks. This theory proposes there must be a sharp distinction in thinking to explain two clusters of correlational features. One cluster describes a fast and intuitive process (Type 1), while the other describes a slow and reflective one (Type 2), (see Evans, 2008; Evans & Stanovich, 2013; Kahneman, 2011). However, as Samuels (2009) has noted, there is a problem of determining why these group of features form clusters, more than what the labels Type (or system) 1 and 2 can capture, the unity problem. We understand there might be differences in the processing architecture that grounds each type of process, thus requiring distinct cognitive frameworks for each. We argue that the predictive processing approach (as held by Hohwy, 2013 and Clark, 2016) is a more suitable framework for Type 1 processing. Such an approach proposes cognition is in the job of attempting to predict what will perturb sensory inputs next. These are not personal predictions but rather multiple sub-personal predictions that even the visual system makes at various layers at each millisecond that passes. Rather than being based on a symbolic representation of each aspect of the world, these predictions are made on the basis of statistical information updated moment by moment. This statistical content tracks previous sensory states and the causes of these previous sensory states. Kahneman (2011) has been arguing that there is a link between perception and Type 1 processing. What we hold is that such link obtains because Type 1 judgments actually are predictions stemming from higher layers of perceptual systems which work by means of predictive processing. On the other hand, we propose such architecture does not handle Type 2 processes. Rather, these seem to be based on classical symbol systems executing heuristic search as explained by Newell (1980). 
In conclusion, we propose a dual framework is necessary for explaining why there are two clusters of features. Such a framework would include predictive processing for explaining Type 1 processing and computations on symbolic representations for Type 2 processing.
... We believe this complex modulatory behavior in conjunction with the generative model is the key to understanding "background coping" or "affordances on the horizon" [15] [56] [81], the nuanced milieu within which we effortlessly traverse the environment and skillfully cope with the familiar external surroundings in order to fulfill our internal needs. Further research will be required to tease apart the bodily contribution of affective states and their influence on the operations of inference based perception, action, and learning. ...
Article
Full-text available
In this paper, we are interested in the open-ended development of adaptive behavior by infant humans in the context of embodied, enactive cognitive science. We focus on the sensorimotor development of an infant child from gestation to toddler and discuss what aspects of the body, brain, and environment could allow for the sort of leaps of complexity observed in the developing infant that has heretofore not been replicable by artificial means. We use the backdrops of Piagetian developmental principles and Sensorimotor Contingency Theory to discuss this process in terms of skill proficiency, and discuss biologically plausible means for achieving it by referring to predictive processing and the free energy principle. We also refer to the theory of affordances to examine the selection of appropriate behaviors in a complex environment, and investigate phenomenological accounts to discuss the intentionality inherent in the purposeful behaviors that develop. Throughout this paper we develop a functional account of infant development which is based on the aforementioned theories and which leads to a biologically realistic explanation for the theory laid out by Piaget consistent with the embodied and enactive views.
... 3 See Wheeler (2008) for a response to Dreyfus. 4 See Glazebrook (2000) for a discussion of historical shifts in Heidegger's thought about science. ...
... The set of the relevant contingencies that might affect our simplest decisions can neither be fully represented nor reduced to a smaller set of elements, if we want to preserve the sensitivity to the real context of our intelligent processes. This is the well-known philosophical version of the "frame problem" of artificial intelligence (Dreyfus 1992; Wheeler 2008): the persistent difficulty of analytically spelling out the determining factors behind any rational procedure, due to the impossibility of exhaustively listing all (and only) the context-sensitive variables that would apply a priori to a certain procedure-based decision. As the frame problem threatens any system that derives its intelligence from representation, it is not just a theoretical impasse for the cognitivist dream to build thinking machines based on internal models of the world and rules of thumb; it is also a serious difficulty for the rationalistic approaches to social cognition, insofar as they presuppose an attribution of rationality to the agent's decisional processes according to an abstractly universal standard of objective economy and parsimony. ...
Article
Full-text available
I consider two distinct deflationary theories in social cognition that aim to explain action understanding without demanding meta-representational or mindreading processes: the first one is the 'teleological stance hypothesis' (TSH), claiming that we infer the intended goal of a certain observed action based on the mere perception of its effects and of its situational constraints; I decided to dub the second one 'the embodied familiarity hypothesis' (EFH) to comprise all the theories claiming that we recognize the intended goal of a certain action based on the perceptual or motoric expertise developed within the sensorimotor contingencies associated to that action's context. TSH's main requirement is that the observer could ascribe efficiency, and therefore rationality, to the observed agent's movement, while EFH's main requirement is that the observer were somehow exposed to the perceptual or motoric details of the observed agent's action. I argue that EFH describes a more primitive and fundamental form of action understanding, i.e. one that is necessarily presupposed by TSH: in fact, while recognizing efficiency is neither a necessary nor a sufficient condition for detecting goal-relatedness, some kind of perceptual or motoric familiarity with the details of the observed action's context is always necessary for any ascription of efficiency, and therefore of rationality, to the observed agent. I conclude that, while TSH might certainly be effective in describing certain rational forms of action understanding, it implicitly requires EFH to be true, as its inferential system would be groundless without an assumed familiar background of embodied expertise.
... Wheeler (2008) characterizes this as the 'inter-contextual' dimension to the frame problem, which is the challenge of saying 'how a purely mechanistic system might achieve appropriate, flexible and fluid action in worlds in which adaptation to new contexts is open-ended and in which the number of potential contexts is indeterminate' (p.340). Wheeler's hypothesis that the problem is overcome in systems that exhibit 'continuous reciprocal causation' is close to the present standpoint. ...
Article
Full-text available
To understand the mind and its place in Nature is one of the great intellectual challenges of our time, a challenge that is both scientific and philosophical. How does cognition influence an animal's behaviour? What are its neural underpinnings? How is the inner life of a human being constituted? What are the neural underpinnings of the conscious condition? This book approaches each of these questions from a scientific standpoint. But it contends that, before we can make progress on them, we have to give up the habit of thinking metaphysically, a habit that creates a fog of philosophical confusion. From this post-reflective point of view, the book argues for an intimate relationship between cognition, sensorimotor embodiment, and the integrative character of the conscious condition. Drawing on insights from psychology, neuroscience, and dynamical systems, it proposes an empirical theory of this three-way relationship whose principles, not being tied to the contingencies of biology or physics, are applicable to the whole space of possible minds in which humans and other animals are included. The book provides a joined-up theory of consciousness.
... Wheeler explains the idea in the following way in discussing the related example of cricket phonotaxis: …the cricket's special purpose mechanism, in the very process of being activated by a specific environmental trigger, brings a context of activity along with it, implicitly realised in the very operating principles which define that mechanism's successful functioning. (Wheeler 2008: 335) The fly-snapping mechanism only works (i.e., it only results in the frog catching a fly) when it fires in response to small black moving objects. Built into the mechanism's operating principles is the context in which the mechanism functions (i.e., a context in which there are black moving objects present). ...
Article
Full-text available
Following a brief reconstruction of Hutto & Satne’s paper we focus our critical comments on two issues. First we take up H&S’s claim that a non-representational form of ur-intentionality exists that performs essential work in setting the scene for content-involving forms of intentionality. We will take issue with the characterisation that H&S give of this non-representational form of intentionality. Part of our commentary will therefore be aimed at motivating an alternative account of how there can be intentionality without mental content, which we have called skilled intentionality. Skilled intentionality is the individual’s selective openness and responsiveness to a rich landscape of affordances. A second issue we take up concerns the distinction between ur-intentionality and content-involving intentionality. We will argue that our notion of skilled intentionality as it is found in humans cuts across these two categories. Instead of distinguishing between different forms of intentionality we recommend focusing on how skilled intentionality takes different forms in different forms of life.
Preprint
Full-text available
The article analyses foundational principles relevant to the creation of artificial general intelligence (AGI). Intelligence is understood as the ability to create novel skills that allow goals to be achieved under previously unknown conditions. To this end, intelligence utilises reasoning methods such as deduction, induction and abduction as well as other methods such as abstraction and classification to develop a world model. The methods are applied to indirect and incomplete representations of the world, which are obtained through perception, for example, and which do not depict the world but only correspond to it. Due to these limitations and the uncertain and contingent nature of reasoning, the world model is constructivist. Its value is functionally determined by its viability, i.e., its potential to achieve the desired goals. In consequence, meaning is assigned to representations by attributing to them a function that makes it possible to achieve a goal. This representational and functional conception of intelligence enables a naturalistic interpretation that does not presuppose mental features, such as intentionality and consciousness, which are regarded as independent of intelligence. Based on a phenomenological analysis, it is shown that AGI can gain a more fundamental access to the world than humans, although it is limited by the No Free Lunch theorems, which require assumptions to be made.
Chapter
According to dominant views in affective computing, artificial systems e.g. robots and algorithms cannot experience emotion because they lack the phenomenological aspect associated with emotional experience. In this paper I suggest that if we wish to design artificial systems such that they are able to experience emotion states with phenomenal properties we should approach artificial phenomenology by borrowing insights from the concept of 'attunement to the world' introduced by early phenomenologists. This concept refers to an openness to the world, a connection with the world which rejects the distinction between an internal mind and the external world. Early phenomenologists such as Heidegger consider this 'attunement' necessary for the experience of affective states. I argue that, if one accepts that the phenomenological aspect is part of the emotion state and that 'attunement to the world' is necessary for experiencing emotion, affective computing should focus on designing artificial systems which are 'attuned to the world' in the phenomenological sense to enable them to experience emotion. Current accounts of the phenomenal properties of affective states analyse them in terms of specific types of representations. As artificial systems lack a capability for such representation, mainly because of an inability to determine relevance in changing contexts ('the frame problem'), artificial phenomenology is impossible. I argue that some affective states, such as 'attunement', are not necessarily representational, and as such a lack of capacity for representation does not imply that artificial phenomenology is impossible. At the same time 'attunement' helps restrict some aspects of the 'frame problem' and, as such, goes some way towards enabling representational states such as emotion. Keywords: Artificial systems; Phenomenology; Representationalism; Artificial emotion; Attunement
Article
Full-text available
What is it about our current digital technologies that seemingly makes it difficult for users to attend to what matters to them? According to the dominant narrative in the literature on the “attention economy,” a user’s lack of attention is due to the large amounts of information available in their everyday environments. I will argue that information-abundance fails to account for some of the central manifestations of distraction, such as sudden urges to check a particular information-source in the absence of perceptual information. I will use active inference, and in particular models of action selection based on the minimization of expected free energy, to develop an alternative answer to the question about what makes it difficult to attend. Besides obvious adversarial forms of inference, in which algorithms build up models of users in order to keep them scrolling, I will show that active inference provides the tools to identify a number of problematic structural features of current digital technologies: they contain limitless sources of novelty, they can be navigated by very simple and effortless motor movements, and they offer their action possibilities everywhere and anytime independent of place or context. Moreover, recent models of motivated control show an intricate interplay between motivation and control that can explain sudden transitions in motivational state and the consequent alteration of the salience of actions. I conclude, therefore, that the challenges users encounter when engaging with digital technologies are less about information overload or inviting content, but more about the continuous availability of easily available possibilities for action.
Article
This paper is an attempt to understand expanding information spaces from a phenomenological perspective. As technology continues to challenge the online/offline distinction, phenomenology provides a useful framework for thinking about context, the role of situated being, and the need for order. Artificial intelligence and context-aware computing are used as examples of information environments that specifically call out the benefits of understanding information as a bodily entity—as in a "body of information" or "body of knowledge". Concentrating on a Heideggerian approach to technology, which in part characterizes technology as a call for order and structure, the essay will examine the idea of 'structured flexibility' needed for systems that not only process information but also predict needs, shape information contexts, and actively engage in the user-system interaction. Finally, it will provide new ways for information architects to think about the expanding space of information.
Chapter
The Cambridge Handbook of Computational Cognitive Sciences is a comprehensive reference for this rapidly developing and highly interdisciplinary field. Written with both newcomers and experts in mind, it provides an accessible introduction of paradigms, methodologies, approaches, and models, with ample detail and illustrated by examples. It should appeal to researchers and students working within the computational cognitive sciences, as well as those working in adjacent fields including philosophy, psychology, linguistics, anthropology, education, neuroscience, artificial intelligence, computer science, and more.
Article
Full-text available
In this paper, I provide an overview of today’s philosophical approaches to the problem of “intelligence” in the field of artificial intelligence by examining several important papers on phenomenology and the philosophy of biology such as those on Heideggerian AI, Jonas's metabolism model, and slime mold type intelligence.
Chapter
What methods are used by phenomenologists? This chapter explains the ‘natural attitude’ and reviews the classic methodological steps involved in doing phenomenology, moving from the natural attitude to a phenomenological and transcendental attitude: the epoché, the phenomenological reduction, and eidetic variation. We can add to these methods others that have been developed as a way of naturalizing phenomenology: using mathematics to formalize phenomenology, using neurophenomenology in experimental settings, front-loading phenomenology, and microphenomenological interviews. The chapter concludes with a suggestion for using simulation and computer modeling, as well as evolutionary robotics, as a type of eidetic variation, to address complex phenomena.
Chapter
Full-text available
In Creative Evolution (1907/1911), a pivotal discussion is the extreme complexity of instinctual behavior. As one of many examples, a member of the Hymenoptera “knows” precisely the three locations of motor-neuron complexes at which to sting a cricket such that it is paralyzed, yet remains fully alive for the wasp’s larvae. Two points: (a) This behavior is as much an “irreducible” complex of acts as the structural organization of the wasp’s body, and just as inexplicably formed by natural selection, and (b) the instinctual behavior is actually at the same level as the vital processes of the organism. This is to say that any theory of evolution, be it selection, self-assembly, or self-organization, is equally bound to address not only the origin problem of an organism’s structure, but the correlated functional problem of instinct. Instinct, however, was Bergson’s prime source for holding, firstly, that we must see Consciousness as the impetus behind evolution and secondly that it is only by utilizing the essence of instinct, conjoined with intellect—his “intuition”—that mind and science can penetrate these mysterious evolutionary processes. This double thesis of the role of Consciousness and the role of intuition likely helped to cause Bergson’s neglect in the biological world, but subsequently there has emerged the current sharp awareness of the “Hard Problem” of Consciousness (Chalmers, J Conscious Stud 2:200–219, 1995). The ongoing failure on a solution to this problem—its very, very unresolved status—should give us pause. In fact, integral to the argument of Creative Evolution, though always only obliquely referenced, was Matter and Memory (1896/1912), and in this work was a remarkable solution to the Hard Problem—when understood, an amazing feat of “intuition.” This, we will see, casts Bergson’s view of the role of Consciousness in evolution, and the nature of instinct as one of evolution’s lines of development, in a new light.
Article
Full-text available
At the turn of the 21st century, Susan Leigh Anderson and Michael Anderson conceived and introduced the Machine Ethics research program, which aimed to highlight the requirements under which autonomous artificial intelligence (AI) systems could demonstrate ethical behavior guided by moral values, and at the same time to show that these values, as well as ethics in general, can be representable and computable. Today, the interaction between humans and AI entities is already part of our everyday lives; in the near future it is expected to play a key role in scientific research, medical practice, public administration, education and other fields of civic life. In view of this, the debate over the ethical behavior of machines is more crucial than ever, and the search for answers, directions and regulations is imperative at academic and institutional as well as technical levels. Our discussion with the two inspirers and originators of Machine Ethics highlights the epistemological, metaphysical and ethical questions arising from this project, as well as the realistic and pragmatic demands that dominate artificial intelligence and robotics research programs. Most of all, however, it sheds light upon the contribution of Susan and Michael Anderson regarding the introduction and undertaking of a main objective related to the creation of ethical autonomous agents that will not be based on the "imperfect" patterns of human behavior, or on preloaded hierarchical laws and human-centric values.
Article
Full-text available
In discussion on consciousness and the hard problem, there is an unquestioned background assumption, namely, our experience is stored in the brain. Yet Bergson (in: Matter and memory. Zone Books, New York, 1896/1991) argued that this very question, “Is experience stored in the brain?” is the critical issue in the problem of consciousness. His examination of then-current memory research led him, save for motor or procedural memory, to a “no” answer. Others, for example Sheldrake (in: Science set free. Random House, New York, 2012), have continued this negative assessment of the research findings. So, has this assumption actually been proven since Bergson? Do we know how experience is stored? Or that it is stored? Here, a recent review and model of memory is examined to see where this assumption actually stands. Again, the assessment will be that nothing has changed. The core of the problem, it will be argued, lies in two things: Firstly, the search for how/where experience is stored is motivated—rephrasing Bergson—in the classic metaphysic, a framework on space and time whose logic cannot be coherently, logically adhered to in attempting to explain how experience is stored. Secondly, the search generally assumes an inadequate theory of perception that is implicitly based in this classic metaphysic. If framed within Bergson’s model of perception and his temporal metaphysic, conjoined with J. J. Gibson’s model, the storage-search appears misguided from the start.
Article
Full-text available
The frame problem, which asks how human beings determine relevance efficiently, has been regarded by some philosophers as an obstacle to the progress of the cognitive sciences (Dreyfus, 2007). In this paper, we challenge such pessimism by assessing the capacity of the emotions to resolve one of its main difficulties, namely, the regress difficulty. After elucidating this difficulty, we distinguish two broad positions on its resolution: one that credits the emotions with epistemic success in resolving it (Elgin, 1996; Hookway, 2003; de Sousa, 1980), and one that, more prudently, emphasizes their limitations (Wild and Dohrn, 2008). We will argue that although the salience-relevance function of the emotions seems to offer a solution to the regress difficulty, and thereby to the epistemological aspect of the frame problem, this is not in fact the case: far from resolving the difficulty, the emotions merely displace it onto other problems still awaiting solution.
Article
Full-text available
The emerging neurocomputational vision of humans as embodied, ecologically embedded, social agents—who shape and are shaped by their environment—offers a golden opportunity to revisit and revise ideas about the physical and information-theoretic underpinnings of life, mind, and consciousness itself. In particular, the active inference framework (AIF) makes it possible to bridge connections from computational neuroscience and robotics/AI to ecological psychology and phenomenology, revealing common underpinnings and overcoming key limitations. AIF opposes the mechanistic to the reductive, while staying fully grounded in a naturalistic and information-theoretic foundation, using the principle of free energy minimization. The latter provides a theoretical basis for a unified treatment of particles, organisms, and interactive machines, spanning from the inorganic to organic, non-life to life, and natural to artificial agents. We provide a brief introduction to AIF, then explore its implications for evolutionary theory, ecological psychology, embodied phenomenology, and robotics/AI research. We conclude the paper by considering implications for machine consciousness.
Article
Martin Heidegger is a towering figure in the history of continental philosophy, but his work has recently been brought into productive engagement with analytic philosophy. This paper introduces and explores two channels along which such engagement has been taking place. The first is in metaphysics, where Heideggerian thought has been interpreted either as making the metaphysical concept of being literally senseless or as mandating a revision to classical logic. The second is in philosophy of mind, and more particularly in philosophy of cognitive science, where Heideggerian thought has been used to mount a challenge to representational theories of mind. (Philosophy Compass, 2016.)
Chapter
Merleau-Ponty’s description of Cézanne’s working process reveals two things: first, cognition arises on the basis of perception and action, and, second, cognition arises out of frustration, when an agent confronts non-sense. We briefly present the history of the domain of philosophy and psychology that has claimed that perception-action comes before cognition, especially the work of Merleau-Ponty, Gibson, and Heidegger. We then present an experimental paradigm “front-loading” the Heideggerian phenomenology of encountering tools. The experiments consisted of a dynamical perception-action task and a cognitive task. The results reinforce the distinction between tools being experienced as ready-to-hand and turning into unready- or present-at-hand when sense-making was thwarted. A more cognitive attitude towards the task emerged when participants experienced non-sense. We discuss implications of this for the movement sciences.
Chapter
Attempts to engineer a generally intelligent artificial agent have yet to meet with success, largely due to the (inter-context) frame problem. Given that humans are able to solve this problem on a daily basis, one strategy for making progress in AI is to look for disanalogies between humans and computers that might account for the difference. It has become popular to appeal to the emotions as the means by which the frame problem is solved in human agents. The purpose of this paper is to evaluate the tenability of this proposal, with a primary focus on Dylan Evans’ search hypothesis and Antonio Damasio’s somatic marker hypothesis. I will argue that while the emotions plausibly help solve the intra-context frame problem, they do not function to solve or help solve the inter-context frame problem, as they are themselves subject to contextual variability.
Article
Full-text available
In their exhaustive study of the cognitive operation of analogy (Surfaces and Essences, 2013), Hofstadter and Sander arrive at a paradox: the creative and inexhaustible production of analogies in our thought must derive from a “reminding” operation based upon the availability of the detailed totality of our experience. Yet the authors see no way that our experience can be stored in the brain in such detail nor do they see how such detail could be accessed or retrieved such that the innumerable analogical remindings we experience can occur. Analogy creation, then, should not be possible. The intent here is to sharpen and deepen our understanding of the paradox, emphasizing its criticality. It will be shown that the retrieval problem has its origins in the failure of memory theory to recognize the actual dynamic structure of events (experience). This structure is comprised of invariance laws as per J. J. Gibson, and this event “invariance structure” is exactly what supports Hofstadter and Sander’s missing mechanism for analogical reminding. Yet these structures of invariants, existing only over optical flows, auditory flows, haptic flows, etc., are equally difficult to imagine being stored in a static memory, and thus only exacerbate the problem of the storage of experience in the brain. A possible route to the solution of this dilemma, based in the radical model of Bergson, is also sketched.
Chapter
To function as an intelligent surrogate for its owner, the smartdata agent must be capable of context dependent information processing. Not only does this require that the agent’s behaviour be flexible and fluid, but its adaptation to new contexts must be open-ended, since the number of potential contexts to which it is exposed is indeterminate and possibly infinite. Two types of context are distinguished, positivistic and phenomenological. It is argued that in both types of context, scale free dynamics is required for context dependent information processing. Wild dynamics, a type of scale free dynamics, has characteristics that would allow adaptation to new contexts to be open-ended, and its implementation could be used as a constraint in the evolution of smartdata agents.
Article
The hypothesis of Extended Cognition (ExCog), formulated by Clark and Chalmers (1998), aims to be a bold new hypothesis about the realisers of cognitive processes. It claims that cognitive processes sometimes extend beyond the limits of the skin and skull and include chunks of the environment as their partial realisers. One of the most persuasive arguments in support of this assertion is the famous "parity argument", which calls upon functional similarities between extended cognitive processes and relevant internal processes. This very kind of reasoning gave rise to several arguments against ExCog that compare it to functionalism about the mental and conclude that ExCog must be trivial, radical or unjustified. In this paper ExCog and the underlying parity principle will be defended against four different kinds of "functionalist" arguments. It will be argued that ExCog can be justified as a special form of functionalism, that it is neither trivial nor entailed by the known versions of functionalism, and that the accusation of it being too radical is unwarranted.
Article
Full-text available
Airborne insects are miniature wing-flapping aircraft the visually guided manoeuvres of which depend on analogue, 'fly-by-wire' controls. The front-end of their visuomotor system consists of a pair of compound eyes which are masterpieces of integrated optics and neural design. They rely on an array of passive sensors driving an orderly analogue neural network. We explored in concrete terms how motion-detecting neurons might possibly be used to solve navigational tasks involving obstacle avoidance in a creature whose wings are exquisitely guided by eyes with a poor spatial resolution. We designed, simulated, and built a complete terrestrial creature which moves about and avoids obstacles solely by evaluating the relative motion between itself and the environment. The compound eye uses an array of elementary motion detectors (EMDs) as smart, passive ranging sensors. Like its physiological counterpart, the visuomotor system is based on analogue, continuous-time processing and does not make use of conventional computers. It uses hardly any memory to adjust the robot's heading in real time via a local and intermittent visuomotor feedback loop. This paper shows that the understanding of some invertebrate sensory-motor systems has now reached a level able to provide valuable design hints. Our approach brings into prominence the mutual constraints in the designs of a sensory and a motor system, in both living and non-living ambulatory creatures.
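The correlator scheme this abstract alludes to can be sketched in a few lines. The following is a generic Reichardt-style elementary motion detector, not the paper's actual circuit: the signal shapes, the one-step delay, and the function name are illustrative assumptions.

```python
# Minimal sketch of a Reichardt-style elementary motion detector (EMD),
# the kind of correlator such robots use as passive motion sensors.
# All names and parameters are illustrative, not taken from the paper.

def emd_response(left, right, delay=1):
    """Correlate each photoreceptor signal with a delayed copy of its
    neighbour; the signed difference of the two half-detectors yields
    a direction-selective motion signal."""
    out = []
    for t in range(delay, len(left)):
        half_lr = left[t - delay] * right[t]   # motion left -> right
        half_rl = right[t - delay] * left[t]   # motion right -> left
        out.append(half_lr - half_rl)
    return out

# A bright edge sweeping left-to-right reaches `left` one step before
# `right`, so the detector's summed output is positive.
left = [0, 1, 0, 0, 0, 1, 0, 0]
right = [0, 0, 1, 0, 0, 0, 1, 0]
assert sum(emd_response(left, right)) > 0
```

Arrays of such detectors give relative-motion (and hence range) cues without any central computer, which is the design point the abstract emphasizes.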
Article
Full-text available
This paper introduces a new type of artificial neural network (GasNets) and shows that it is possible to use evolutionary computing techniques to find robot controllers based on them. The controllers are built from networks inspired by the modulatory effects of freely diffusing gases, especially nitric oxide, in real neuronal networks. Evolutionary robotics techniques were used to develop control networks and visual morphologies to enable a robot to achieve a target discrimination task under very noisy lighting conditions. A series of evolutionary runs with and without the gas modulation active demonstrated that networks incorporating modulation by diffusing gases evolved to produce successful controllers considerably faster than networks without this mechanism. GasNets also consistently achieved evolutionary success in far fewer evaluations than were needed when using more conventional connectionist style networks.
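The core GasNet idea, a second, diffusing signalling channel that modulates rather than drives the ordinary connections, can be illustrated with a toy update rule. The node layout, decay rate, and modulation rule below are invented for illustration; the evolved controllers in the paper differ in detail.

```python
import math

# Illustrative sketch of the GasNet idea: alongside ordinary weighted
# connections, an active node emits a diffusing "gas" whose local
# concentration scales the gain of nearby nodes' transfer functions.
# Parameters and layout are invented, not taken from the paper.

def step(activations, weights, positions, gas, emitters,
         decay=0.9, spread=1.0):
    n = len(activations)
    # 1. Gas decays and is replenished by any emitter that is "firing".
    new_gas = []
    for i in range(n):
        c = gas[i] * decay
        for j in emitters:
            if activations[j] > 0.5:
                d = abs(positions[i] - positions[j])
                c += math.exp(-d / spread)     # falls off with distance
        new_gas.append(c)
    # 2. Gas concentration modulates each node's transfer-function gain.
    new_act = []
    for i in range(n):
        net = sum(weights[i][j] * activations[j] for j in range(n))
        gain = 1.0 + new_gas[i]                # multiplicative modulation
        new_act.append(math.tanh(gain * net))
    return new_act, new_gas

# Two-node loop: node 0 sustains itself, excites node 1, and emits gas
# that raises node 1's gain over time.
w = [[1.2, 0.0], [0.8, 0.0]]
act, gas = [1.0, 0.0], [0.0, 0.0]
for _ in range(5):
    act, gas = step(act, w, positions=[0.0, 1.0], gas=gas, emitters=[0])
assert act[1] > 0.5 and gas[1] > 0
```

The point of the sketch is only that modulation is a separate, spatially mediated channel from the weighted connections, which is what distinguishes GasNets from conventional connectionist networks.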
Article
Full-text available
This paper discusses the relationship between Artificial Intelligence (AI) and Artificial Life (A-Life). A-Life research addresses a wide range of phenomena, some of which have no obvious bearing on AI research. The work most relevant to AI is sufficiently coherent and distinct that it is best referred to by its own name: it is Adaptive Behavior research which is most likely to have significant impact on issues traditionally studied in AI. Some motivations for adaptive behavior research are reviewed, and some of the differences between adaptive behavior and traditional AI are discussed. One significant feature of current adaptive behavior research is a focus on relatively simple and specialised cognitive functions, an approach which invites unfavourable comparisons with the "blocksworld" simplified domains which were popular in AI research of the early 1970's. However, such comparisons usually overlook fundamental differences between the blocksworld-AI and Adaptive Behavior approaches ...
Book
This study synthesizes current information from the various fields of cognitive science in support of a new and exciting theory of mind. Most psychologists study horizontal processes like memory and information flow; Fodor postulates a vertical and modular psychological organization underlying biologically coherent behaviors. This view of mental architecture is consistent with the historical tradition of faculty psychology while integrating a computational approach to mental processes. One of the most notable aspects of Fodor's work is that it articulates features not only of speculative cognitive architectures but also of current research in artificial intelligence. Bradford Books imprint
Chapter
August 8-12, 1994, Brighton, England. From Animals to Animats 3 brings together research intended to advance the frontier of an exciting new approach to understanding intelligence. The contributors represent a broad range of interests from artificial intelligence and robotics to ethology and the neurosciences. Unifying these approaches is the notion of "animat"—an artificial animal, either simulated by a computer or embodied in a robot, which must survive and adapt in progressively more challenging environments. The 58 contributions focus particularly on well-defined models, computer simulations, and built robots in order to help characterize and compare various principles and architectures capable of inducing adaptive behavior in real or artificial animals. Topics Include Individual and collective behavior • Neural correlates of behavior • Perception and motor control • Motivation and emotion • Action selection and behavioral sequences • Ontogeny, learning, and evolution • Internal world models and cognitive processes • Applied adaptive behavior • Autonomous robots • Hierarchical and parallel organizations • Emergent structures and behaviors • Problem solving and planning • Goal-directed behavior • Neural networks and evolutionary computation • Characterization of environments Bradford Books imprint
Book
Two psychologists, a computer scientist, and a philosopher have collaborated to present a framework for understanding processes of inductive reasoning and learning in organisms and machines. Theirs is the first major effort to bring the ideas of several disciplines to bear on a subject that has been a topic of investigation since the time of Socrates. The result is an integrated account that treats problem solving and induction in terms of rule-based mental models. Bradford Books imprint
Book
In Reconstructing the Cognitive World, Michael Wheeler argues that we should turn away from the generically Cartesian philosophical foundations of much contemporary cognitive science research and proposes instead a Heideggerian approach. Wheeler begins with an interpretation of Descartes. He defines Cartesian psychology as a conceptual framework of explanatory principles and shows how each of these principles is part of the deep assumptions of orthodox cognitive science (both classical and connectionist). Wheeler then turns to Heidegger's radically non-Cartesian account of everyday cognition, which, he argues, can be used to articulate the philosophical foundations of a genuinely non-Cartesian cognitive science. Finding that Heidegger's critique of Cartesian thinking falls short, even when supported by Hubert Dreyfus's influential critique of orthodox artificial intelligence, Wheeler suggests a new Heideggerian approach. He points to recent research in "embodied-embedded" cognitive science and proposes a Heideggerian framework to identify, amplify, and clarify the underlying philosophical foundations of this new work. He focuses much of his investigation on recent work in artificial intelligence-oriented robotics, discussing, among other topics, the nature and status of representational explanation, and whether (and to what extent) cognition is computation rather than a noncomputational phenomenon best described in the language of dynamical systems theory. Wheeler's argument draws on analytic philosophy, continental philosophy, and empirical work to "reconstruct" the philosophical foundations of cognitive science in a time of a fundamental shift away from a generically Cartesian approach. His analysis demonstrates that Heideggerian continental philosophy and naturalistic cognitive science need not be mutually exclusive and shows further that a Heideggerian framework can act as the "conceptual glue" for new work in cognitive science. Bradford Books imprint
Book
Available again, an influential book that offers a framework for understanding visual perception and considers fundamental questions about the brain and its functions. David Marr's posthumously published Vision (1982) influenced a generation of brain and cognitive scientists, inspiring many to enter the field. In Vision, Marr describes a general framework for understanding visual perception and touches on broader questions about how the brain and its functions can be studied and understood. Researchers from a range of brain and cognitive sciences have long valued Marr's creativity, intellectual power, and ability to integrate insights and data from neuroscience, psychology, and computation. This MIT Press edition makes Marr's influential work available to a new generation of students and scientists. In Marr's framework, the process of vision constructs a set of representations, starting from a description of the input image and culminating with a description of three-dimensional objects in the surrounding environment. A central theme, and one that has had far-reaching influence in both neuroscience and cognitive science, is the notion of different levels of analysis—in Marr's framework, the computational level, the algorithmic level, and the hardware implementation level. Now, thirty years later, the main problems that occupied Marr remain fundamental open problems in the study of perception. Vision provides inspiration for the continuing efforts to integrate knowledge from cognition and computation to understand vision and the brain.
Chapter
The notion of modularity, introduced by Noam Chomsky and developed with special emphasis on perceptual and linguistic processes by Jerry Fodor in his important book The Modularity of Mind, has provided a significant stimulus to research in cognitive science. This book presents essays in which a diverse group of philosophers, linguists, psycholinguists, and neuroscientists—including both proponents and critics of the modularity hypothesis—address general questions and specific problems related to modularity. Bradford Books imprint
Article
In the early 1950s, as calculating machines were coming into their own, a few pioneer thinkers began to realise that digital computers could be more than number-crunchers. At that point two opposed visions of what computers could be, each with its correlated research programme, emerged and struggled for recognition. One faction saw computers as a system for manipulating mental symbols; the other, as a medium for modelling the brain. One sought to use computers to instantiate a formal representation of the world; the other, to simulate the interactions of neurons. One took problem solving as its paradigm of intelligence; the other, learning. One utilised logic; the other, statistics. One school was the heir to the rationalist, reductionist tradition in philosophy; the other viewed itself as idealised, holistic neuroscience.
Article
Contents: Introduction; I. Some general features of rational choice; II. The essential simplifications; III. Existence and uniqueness of solutions; IV. Further comments on dynamics; V. Conclusion; Appendix.
Chapter
The frame problem is a difficulty that arises when formal logic is used to represent the effects of actions. The challenge is to avoid having to represent explicitly large numbers of common-sense facts about what does not change when an action occurs.
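The contrast this entry describes can be made concrete with a toy sketch of the standard STRIPS-style workaround: list only an action's effects and let unmentioned facts persist by default. The domain, fluent names, and `apply` helper are invented for illustration.

```python
# Toy illustration of the logical frame problem. A naive axiomatization
# needs explicit "frame axioms" stating what each action does NOT change;
# a STRIPS-style encoding instead assumes unmentioned fluents persist.
# Domain and names are hypothetical.

state = {"door_open": False, "light_on": False, "robot_at": "hall"}

def apply(state, effects):
    """Apply an action by listing only what changes; every fluent not
    mentioned in `effects` carries over unchanged."""
    new_state = dict(state)   # persistence by default
    new_state.update(effects)
    return new_state

s1 = apply(state, {"door_open": True})   # action: open the door
assert s1["light_on"] is False           # persisted without a frame axiom
assert s1["robot_at"] == "hall"          # likewise

# The naive alternative would need one axiom per action/fluent pair
# ("opening the door does not change the light", ...), and the number
# of such axioms grows multiplicatively with the size of the domain.
```

The sketch shows only the representational trick; the philosophical frame problem discussed in the surrounding article concerns relevance-sensitivity, which this default-persistence device does not by itself address.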
Article
David Marr provided a useful framework for theorizing about cognition within classical, AI-style cognitive science, in terms of three levels of description: the levels of (i) cognitive function, (ii) algorithm and (iii) physical implementation. We generalize this framework: (i) cognitive state transitions, (ii) mathematical/functional design and (iii) physical implementation or realization. Specifying the middle, design level to be the theory of dynamical systems yields a nonclassical, alternative framework that suits (but is not committed to) connectionism. We consider how a brain's (or a network's) being a dynamical system might be the key both to its realizing various essential features of cognition — productivity, systematicity, structure-sensitive processing, syntax — and also to a non-classical solution of (frame-type) problems plaguing classical cognitive science.
Conference Paper
In this paper, I propose a neo-Heideggerian framework for A-Life. Following an explanation of some key Heideggerian ideas, I endorse the view that persistent problems in orthodox cognitive science result from a commitment to a Cartesian subject-object divide. Heidegger rejects the primacy of the subject-object dichotomy; and I set about the task of showing how, by adopting a Heideggerian view, A-Life can avoid the problems that have plagued cognitive science. This requires that we extend the standard Heideggerian framework by introducing the notion of a biological background, a set of evolutionary determined practices which structure the norms of animal worlds. I argue that optimality/ESS models in behavioural ecology provide a set of tools for identifying these norms, and, to secure this idea, I defend a form of adaptationism against enactivist worries. Finally, I show how A-Life can assist in the process of mapping out biological backgrounds, and how recent dynamical systems approaches in A-Life fit in with the neo-Heideggerian conceptual framework.
Conference Paper
Computers and Thought are the two categories that together define Artificial Intelligence as a discipline. It is generally accepted that work in Artificial Intelligence over the last thirty years has had a strong influence on aspects of computer architectures. In this paper we also make the converse claim: that the state of computer architecture has been a strong influence on our models of thought. The Von Neumann model of computation has led Artificial Intelligence in particular directions. Intelligence in biological systems is completely different. Recent work in behavior-based Artificial Intelligence has produced new models of intelligence that are much closer in spirit to biological systems. The non-Von Neumann computational models they use share many characteristics with biological computation.
Article
Hubert L. Dreyfus used Heidegger as a guide for the whole AI (artificial intelligence) program at MIT. He introduced Heidegger's non-representational account of the absorption of Dasein (human being) in the world. He also explained that Heidegger distinguished two modes of being: the readiness-to-hand of equipment when we are involved in using it, and the presence-at-hand of objects when we contemplate them. In his 1925 course, Logic: The Question of Truth, Heidegger describes the most basic experience of what he later calls 'pressing into possibilities' not as dealing with the desk, the door, the lamp, the chair and so forth, but as directly responding to a 'what for'. According to Heidegger, every act of having things in front of oneself and perceiving them is held within the disclosure of those things, a disclosure that things get from a primary meaningfulness in terms of the what-for.
Article
This paper presents an analysis of an artificially evolved dynamical network-based control system for a simulated autonomous mobile robot engaged in simple visually guided tasks.
Book
The notion of bounded rationality was initiated in the 1950s by Herbert Simon; only recently has it influenced mainstream economics. In this book, Ariel Rubinstein defines models of bounded rationality as those in which elements of the process of choice are explicitly embedded. The book focuses on the challenges of modeling bounded rationality, rather than on substantial economic implications. In the first part of the book, the author considers the modeling of choice. After discussing some psychological findings, he proceeds to the modeling of procedural rationality, knowledge, memory, the choice of what to know, and group decisions. In the second part, he discusses the fundamental difficulties of modeling bounded rationality in games. He begins with the modeling of a game with procedurally rational players and then surveys repeated games with complexity considerations. He ends with a discussion of computability constraints in games. The final chapter includes a critique by Herbert Simon of the author's methodology and the author's response. The Zeuthen Lecture Book series is sponsored by the Institute of Economics at the University of Copenhagen.
Article
"I shall speak of ghost, of flame, and of ashes." These are the first words of Jacques Derrida's lecture on Heidegger. It is again a question of Nazism—of what remains to be thought through of Nazism in general and of Heidegger's Nazism in particular. It is also "politics of spirit" which at the time people thought—they still want to today—to oppose to the inhuman. "Derrida's ruminations should intrigue anyone interested in Post-Structuralism. . . . . This study of Heidegger is a fine example of how Derrida can make readers of philosophical texts notice difficult problems in almost imperceptible details of those texts."—David Hoy, London Review of Books "Will a more important book on Heidegger appear in our time? No, not unless Derrida continues to think and write in his spirit. . . . Let there be no mistake: this is not merely a brilliant book on Heidegger, it is thinking in the grand style."—David Farrell Krell, Research in Phenomenology "The analysis of Heidegger is brilliant, provocative, elusive."—Peter C. Hodgson, Religious Studies Review
Article
Marr's demonstrations that retinal receptive field geometry could be derived by Fourier transformation of spatial frequency sensitivity data, that edges and contours could be detected by finding zero crossings in the light gradient by taking the Laplacian or second directional derivative, that excitatory and inhibitory receptive fields could be constructed from "DOG" functions (the difference of two Gaussians), and that the visual system used a two-dimensional convolution integral with a Gaussian prefilter as an operator for bandwidth optimization on the retinal light distribution, were more powerful than anything that had been seen up to that time. It was as if vision research suddenly acquired its own Principia Mathematica, or perhaps General Relativity Theory, in terms of the new explanatory power Marr's theories provided. Truly an extraordinary book from an extraordinary thinker in the area of perception, vision, and the brain.
  • Dreyfus, H. L., 'Why Heideggerian AI Failed and How Fixing it Would Require Making it More Heideggerian', in P. Husbands, O. Holland and M. Wheeler (eds).
  • Husbands, P., Smith, T., Jakobi, N. and Shea, M., 'Better Living through Chemistry: Evolving GasNets for Robot Control'.