Article

Vehicles: Experiments in Synthetic Psychology

... Conversely, our toolchain frees the researcher from the constraints of the hardcoding approach described above by enabling neural simulators to be connected to any of the myriad robotic simulators implementing a ROS interface (including Gazebo, Morse, or Webots). To demonstrate the capabilities of the toolchain, we implement a Braitenberg Vehicle (Braitenberg, 1986) in which the agent is simulated in Gazebo and the neural controller in NEST. ...
... Using rate coding, the number of neurons can be freely chosen. In the example of the Braitenberg Vehicle (Braitenberg, 1986), we use only two neurons for encoding the complete sensory input (see Section 3.1). ...
... From these results, we conclude that the latency and the dimensionality of the input are two conflicting properties, which have to be balanced for the specific use case. As an example of the toolchain in use, we created a Braitenberg Vehicle III "Explorer" (Braitenberg, 1986) which is simulated in the robotic environment Gazebo (see Figure 5). The Braitenberg Vehicle is implemented as a four-wheeled mobile robot with an attached laser scanner for sensory input. ...
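A minimal sketch of such a controller (illustrative only: the function names and the exact rate-coding scheme are assumptions, not the toolchain's actual NEST/ROS code). Each half of the laser scan is averaged into one "sensor neuron" firing rate, matching the two-neuron encoding mentioned above, and each rate slows the opposite wheel so the robot steers away from obstacles:

```python
import numpy as np

def rate_code(scan, max_range=5.0):
    """Encode a laser scan into two firing rates (hypothetical scheme):
    average obstacle proximity over the left and right halves of the scan."""
    half = len(scan) // 2
    proximity = 1.0 - np.clip(scan, 0.0, max_range) / max_range
    return proximity[:half].mean(), proximity[half:].mean()

def braitenberg_explorer(scan, base_speed=1.0, gain=2.0):
    """Explorer-style controller: each side's 'sensor neuron' inhibits the
    wheel on the opposite side, steering the robot away from obstacles."""
    left_rate, right_rate = rate_code(scan)
    left_wheel = base_speed - gain * right_rate
    right_wheel = base_speed - gain * left_rate
    return left_wheel, right_wheel

# Toy usage: close laser returns on the left make the robot veer right.
scan = np.full(180, 5.0)
scan[:60] = 0.5
print(braitenberg_explorer(scan))
```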
Preprint
In order to properly assess the function and computational properties of simulated neural systems, it is necessary to account for the nature of the stimuli that drive the system. However, providing stimuli that are rich and yet both reproducible and amenable to experimental manipulations is technically challenging, and even more so if a closed-loop scenario is required. In this work, we present a novel approach to solve this problem, connecting robotics and neural network simulators. We implement a middleware solution that bridges the Robot Operating System (ROS) to the Multi-Simulator Coordinator (MUSIC). This enables any robotic and neural simulators that implement the corresponding interfaces to be efficiently coupled, allowing real-time performance for a wide range of configurations. This work extends the toolset available for researchers in both neurorobotics and computational neuroscience, and creates the opportunity to perform closed-loop experiments of arbitrary complexity to address questions in multiple areas, including embodiment, agency, and reinforcement learning.
... In the brain, such phenomena are known to occur at different scales, and are heavily studied at both the anatomical and computational levels. In particular, synchronization has been proposed as a general principle for temporal binding of multisensory data [42,13,25,30,47,23,32], and as a mechanism for perceptual grouping [51], neural computation [3,1,50] and neural communication [21,17,39,40]. Similar mathematical models describe fish schooling or certain types of phase transitions in physics [45]. ...
... Symmetry, in particular bilateral symmetry, has also been shown to play a key role in human perception [3]. Consider a group of oscillators having the same individual dynamics and connected together in a symmetric manner. ...
... One application of this idea is to build a fast bilateral symmetry detector (figures 8,9,10), extending the oscillator-based coincidence detectors of the previous section. Although based on a radically different mechanism, this symmetry detector is also somewhat reminiscent of the device in [3]. ...
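As a rough illustration of the synchronization regime these excerpts discuss, here is a Kuramoto-style sketch (my construction, not the paper's contraction-analysis model): identical phase oscillators with symmetric all-to-all coupling converge to a common phase, the simplest case of the fully synchronized groups described above.

```python
import numpy as np

def kuramoto_step(phases, omega, K, dt=0.01):
    """One Euler step of the Kuramoto model with all-to-all coupling:
    d(theta_i)/dt = omega + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(phases)
    coupling = (K / n) * np.sin(phases[None, :] - phases[:, None]).sum(axis=1)
    return phases + dt * (omega + coupling)

def order_parameter(phases):
    """Kuramoto order parameter r in [0, 1]; r -> 1 means full synchrony."""
    return abs(np.exp(1j * phases).mean())

rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, size=10)  # identical oscillators, random phases
for _ in range(5000):
    phases = kuramoto_step(phases, omega=1.0, K=2.0)
print(f"order parameter after coupling: {order_parameter(phases):.3f}")
```

Concurrent synchronization, in which several such groups coexist, corresponds to several clusters each collapsing onto its own common phase.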
Preprint
In a network of dynamical systems, concurrent synchronization is a regime where multiple groups of fully synchronized elements coexist. In the brain, concurrent synchronization may occur at several scales, with multiple "rhythms" interacting and functional assemblies combining neural oscillators of many different types. Mathematically, stable concurrent synchronization corresponds to convergence to a flow-invariant linear subspace of the global state space. We derive a general condition for such convergence to occur globally and exponentially. We also show that, under mild conditions, global convergence to a concurrently synchronized regime is preserved under basic system combinations such as negative feedback or hierarchies, so that stable concurrently synchronized aggregates of arbitrary size can be constructed. Robustness of stable concurrent synchronization to variations in individual dynamics is also quantified. Simple applications of these results to classical questions in systems neuroscience and robotics are discussed.
... To better understand anthropomorphism, consider the 'Braitenberg vehicle': a thought experiment suggesting that the straightforward actions of the vehicles (robots) appear deliberate and motivated by emotions, rather than merely governed by their preprogrammed reactions (Braitenberg, 1986). For instance, a Braitenberg vehicle might perform in a way that appears 'fearful' or 'curious' due to how it responds to environmental cues, which could lead a bystander to anthropomorphise the vehicle and attribute these emotions to it. ...
... In Braitenberg vehicles, a minor increase in the complexity of a system can quickly transform a relatively basic system into a black box for humans (Rai, 2019). As a result, people may begin to lose faith in the system, as they may believe that they cannot influence the vehicle's behaviour (Braitenberg, 1986). The resulting sophisticated system's lack of transparency in its behaviour can lead people to become less empathetic towards it, since they are less likely to perceive the system as a thinking being with its own emotions and goals (Braitenberg, 1986). The importance of empathy in acceptance can be evidenced in ChatGPT (a large language model), which has been recently launched (OpenAI, 2022). ...
Article
Full-text available
Background: Trustworthiness in Artificial Intelligence (AI) innovation is a priority for governments, researchers and clinicians; however, clinicians have highlighted trust and confidence as barriers to their acceptance of AI within a clinical application. While there is a call to design and develop AI that is considered trustworthy, AI still lacks the emotional capability to facilitate the reciprocal nature of trust. Aim: This paper aims to highlight and discuss the enigma of seeking or expecting trust attributes from a machine and, secondly, to reframe the interpretation of trustworthiness for AI by evaluating its reliability and validity, consistent with the use of other clinical instruments. Results: AI interventions should be described in terms of competence, reliability and validity, as expected of other clinical tools where quality and safety are a priority. Nurses should be presented with treatment recommendations that describe the validity and confidence of the prediction, with the final decision for care made by nurses. Future research should be framed to better understand how AI is used to deliver care. Finally, there is a responsibility for developers and researchers to influence the conversation about AI and its power towards improving outcomes. Conclusion: The sole focus on demonstrating trust, rather than the business-as-usual requirement for reliability and validity attributes during implementation phases, may result in negative experiences for nurses and clinical users. Implications for practice: This research will have significant implications for the way in which future nursing is practised. As AI-based systems become a part of routine practice, nurses will be faced with an increasing number of interventions that require complex trust systems to operate. For any AI researchers and developers, understanding the complexity of trust and credibility in the use of AI in nursing will be crucial for successful implementation. This research will contribute and assist in understanding nurses' role in this change.
... Recurrent Computation. The concept of recurrence in machine learning traces back to foundational works on neural computation (Braitenberg, 1986) and LSTM networks (Gers and Schmidhuber, 2000). Modern extensions integrate recurrence into transformers through depth recurrence (Dehghani et al., 2019a; Lan et al., 2019; Ng and Wang, 2024), with recent improvements demonstrating algorithmic generalization via randomized unrolling (Schwarzschild et al., 2021; McLeish et al., 2024). ...
Preprint
Full-text available
Large language models (LLMs) face inherent performance bottlenecks under parameter constraints, particularly in processing critical tokens that demand complex reasoning. Empirical analysis reveals challenging tokens induce abrupt gradient spikes across layers, exposing architectural stress points in standard Transformers. Building on this insight, we propose Inner Thinking Transformer (ITT), which reimagines layer computations as implicit thinking steps. ITT dynamically allocates computation through Adaptive Token Routing, iteratively refines representations via Residual Thinking Connections, and distinguishes reasoning phases using Thinking Step Encoding. ITT enables deeper processing of critical tokens without parameter expansion. Evaluations across 162M-466M parameter models show ITT achieves 96.5% performance of a 466M Transformer using only 162M parameters, reduces training data by 43.2%, and outperforms Transformer/Loop variants in 11 benchmarks. By enabling elastic computation allocation during inference, ITT balances performance and efficiency through architecture-aware optimization of implicit thinking pathways.
... According to this view, different structures can perform the same function, as exemplified by the octopus mind (Godfrey-Smith 2016; King and Marino 2019). Similarly, behaviours taken to reveal sentience can be mimicked, as in Braitenberg's vehicles (Braitenberg 1984), or pretended, as by psychopaths (Fisher et al. 2018). The function of sentience is equally problematic, with its presumed selective advantage yet to be established (Allen 2004; Casser 2021). ...
... This analysis has shown that the strive for a univocal and universally shared definition can hide undeclared interests or metaphysical prejudices. When we pick a definition, we should always in the first place clarify our premises (what pragmatical interests make us in need of a definition and what is our metaphysical approach) and the goals of our actions. [Table 3, "Elements of the two primary metaphors in sentience research" (scheme from Kampourakis 2020, p. 107), contrasts the mechanism metaphor, under which a system is decomposable into independent parts and their interactions (Bechtel and Abrahamsen 2005; Glennan 2002), linked to other cognitive traits (Carruthers 1989, 2000), describable and replicable through algorithms (Braitenberg 1984), and lacking moral significance (Véliz 2021), with the organism metaphor, under which sentience belongs to a complex system whose behaviour is not easily predictable nor replicable (Bronfman et al. 2016; Veit 2023) and is a novel, non-additive emergent feature (O'Connor and Wong 2005).] ...
Article
Full-text available
Despite the growing interest in sentience, especially regarding sentience in non-human animals, there is little agreement around its evolutionary origin and distribution, ontological status or ethical relevance. One aspect of this work-in-progress situation is a panoply of definitions of sentience to be found in the literature, from intensional ones appealing to ontological features (such as awareness, agency, consciousness) to implicit extensional ones based on physiological, morphological or behavioural features. We review and classify some of the most common definitions of sentience to underline how they inevitably rely on pragmatical interests and ontological commitments which, when not explicitly declared, lead to pitfalls and fallacies when the definition is applied in practice. We claim that the obstacles, impasses and complications in the definition of sentience are consubstantial to its interest- and theory-laden, subjective nature. We suggest that, given these features, each definition should only be used in the limited epistemic and practical area it is produced for, while the general discourse around sentience is better developed through metaphorical rather than definitional language.
... Living organisms interact with their surroundings through sensory inputs in a closed-loop fashion [1]. To achieve basic closed-loop navigation in a robot it is sufficient to directly connect a robot's sensors to its motor effectors, and the specific excitatory or inhibitory connections determine the control strategy [2]. These sensor-effector connections represent a single closed-loop controller, which produces stereotyped behaviour in response to an immediate stimulus. ...
... Spelke and Kinzler [4] introduce "core knowledge" as the seemingly innate understanding of basic properties of one's environment, such as physics and causality, observed in newborn animals. In this work, we propose a framework where closed-loop behaviours (CLB) such as those described in [2] are combined with core knowledge to achieve multi-step-ahead planning in an embodied, situated agent. As shown in Fig. 1B, a task is a closed-loop behaviour contingent on a disturbance D which enters the environment P and generates an error signal E via the robot's sensors. ...
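A "task" in the above sense can be caricatured in a few lines (signal names follow the excerpt's D/P/E notation; the proportional controller and all values are assumptions for illustration): a disturbance enters the plant, the sensed error drives negative feedback, and the loop terminates once the error is negligible.

```python
def run_task(disturbance, setpoint=0.0, gain=0.8, steps=50, tol=1e-3):
    """Temporary closed control loop: actuates against the error E between
    the sensed plant state P and the setpoint, terminating once E ~ 0."""
    state = disturbance          # disturbance D enters the environment/plant P
    for t in range(steps):
        error = state - setpoint # error signal E via the robot's sensors
        if abs(error) < tol:
            return t, state      # task fulfilled: the loop can be torn down
        state -= gain * error    # negative-feedback actuation
    return steps, state

print(run_task(disturbance=1.0))  # converges toward the setpoint in a few steps
```

A "Configurator" in the excerpt's sense would create and destroy such loops, simulating candidate sequences of them in a physics engine before committing to a plan.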
Preprint
We present a hierarchical framework to solve robot planning as an input control problem. At the lowest level are temporary closed control loops ("tasks"), each representing a behaviour contingent on a specific sensory input and therefore temporary. At the highest level, a supervising "Configurator" directs task creation and termination. Here resides "core" knowledge as a physics engine, where sequences of tasks can be simulated. The Configurator encodes and interprets simulation results, based on which it can choose a sequence of tasks as a plan. We implement this framework on a real robot and test it in an overtaking scenario as proof of concept.
... Minimalist models of emotion-driven behavior have long served as foundational tools in both robotics and computational neuroscience. Braitenberg vehicles exemplify how simple sensorimotor connections can give rise to complex, seemingly emotional behaviors such as fear, aggression, or attraction (Braitenberg, 1984). These vehicles have been extensively utilized as computational tools to explore neural mechanisms underlying navigation and behavior (Shaikh & Rañó, 2020). ...
Preprint
Full-text available
This conceptual contribution offers a speculative account of how artificial intelligence (AI) systems might emulate emotions as experienced by humans, and plausibly by other animals. It presents a thought experiment grounded in the hypothesis that natural emotions evolved as heuristics for rapid situational appraisal and action selection, enabling biologically adaptive behavior without requiring full deliberative modeling. The text examines whether artificial systems operating in complex action spaces could similarly benefit from these principles. It is proposed that affect be interwoven with episodic memory by storing corresponding affective tags alongside all events. This allows AIs to establish whether present situations resemble past events and project the associated emotional labels onto the current context. These emotional cues are then combined with need-driven emotional hints. The combined emotional state facilitates decision-making in the present by modulating action selection. The low complexity and experiential inertness of the proposed architecture are emphasized as evidence that emotional expression and consciousness are, in principle, orthogonal, permitting the theoretical possibility of affective zombies. On this basis, the moral status of AIs emulating affective states is critically examined. It is argued that neither the mere presence of internal representations of emotion nor consciousness alone suffices for moral standing; rather, the capacity for self-awareness of inner emotional states is posited as a necessary condition. A complexity-based criterion is proposed to exclude such awareness in the presented model. Additional thought experiments are presented to test the conceptual boundaries of this framework.
... Braitenberg vehicles [18] demonstrate how purely reactive sensor-actuator links can yield emergent "intelligent-looking" behaviors. Although lacking complex cognition, they meet basic observer criteria by receiving sensory data, updating motor outputs, and influencing their environment in a feedback loop. ...
Preprint
Full-text available
We propose a formal framework for understanding and unifying the concept of observers across physics, computer science, philosophy, and related fields. Building on cybernetic feedback models, we introduce an operational definition of minimal observers, explore their role in shaping foundational concepts, and identify what remains unspecified in their absence. Drawing upon insights from quantum gravity, digital physics, second-order cybernetics, and recent ruliological and pregeometric approaches, we argue that observers serve as indispensable reference points for measurement, reference frames, and the emergence of meaning. We show how this formalism sheds new light on debates related to consciousness, quantum measurement, and computational boundaries; by way of theorems on observer equivalences and complexity measures. This perspective opens new avenues for investigating how complexity and structure arise in both natural and artificial systems.
... PS #2: Generating Complex Behavior. This problem set demonstrated both how easy it was to construct systems that generated rich, uncontrolled behavior and how difficult it was to guess the internal structure of an agent by observing its behavior alone. Much of this problem set focused on the elegant examples of agent construction developed by Valentino Braitenberg (Braitenberg 1986). Students were familiarized with a simple 2-wheel, 2-sensor virtual agent that moved about on a flat 2-dimensional plane. ...
Article
Full-text available
The rapid and nearly pervasive impact of artificial intelligence on fields as diverse as medicine, law, banking, and the arts has made many students who would never enroll in a computer science class become interested in understanding elements of artificial intelligence. Fueled by questions about how this technology would change their own fields, these students are not seeking to become experts in building AI systems but instead are searching for a sufficient understanding to be safe, effective, and informed users. In this paper, we describe a first-of-its-kind course offering, "Artificial Intelligence for Future Presidents" designed and taught during the spring of 2024. We share rationale on the design and structure of the course, consider how best to convey complex technical information to students without the background in programming or mathematics, and consider methods for supporting an understanding of the limits of this technology.
... We modeled an agent with minimal neural dynamics that could use sensorimotor coordination with the stimulus sources and the movements of other agents in order to move towards a candidate site. Our agent architecture was based on a minimal Braitenberg vehicle (52), which is a self-driven agent with a very simple architecture: two sensors directly control two motors. To give our agent intrinsic neural dynamics, we connected two oscillator nodes to the sensors (loosely representing sensory brain regions; nodes 1 and 2 in Fig. 1A) and two oscillator nodes connected to the direction of the movement of the agent (loosely representing motor regions; nodes 3 and 4 in Fig. 1A). ...
Article
Full-text available
Collective decision making using simple social interactions has been studied in many types of multiagent systems, including robot swarms and human social networks. However, existing multiagent studies have rarely modeled the neural dynamics that underlie sensorimotor coordination in embodied biological agents. In this study, we investigated collective decisions that resulted from sensorimotor coordination among agents with simple neural dynamics. We equipped our agents with a model of minimal neural dynamics based on the coordination dynamics framework, and embedded them in an environment with a stimulus gradient. In our single-agent setup, the decision between two stimulus sources depends solely on the coordination of the agent’s neural dynamics with its environment. In our multiagent setup, that same decision also depends on the sensorimotor coordination between agents, via their simple social interactions. Our results show that the success of collective decisions depended on a balance of intra-agent, interagent, and agent–environment coupling, and we use these results to identify the influences of environmental factors on decision difficulty. More generally, our results illustrate how collective behaviors can be analyzed in terms of the neural dynamics of the participating agents. This can contribute to ongoing developments in neuro-AI and self-organized multiagent systems.
... Predictive control enhances PEM reduction because it allows systems to enact efficient and context-sensitive actions [67]. Here, the system's actions can change its sensory inputs (typically through behavioral allostasis) [68]. These actions are tuned by the interplay between predictions (descending the hierarchy) and prediction errors (ascending the hierarchy). ...
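The interplay of descending predictions and ascending prediction errors can be sketched with a minimal single-level update rule (my illustration, not the authors' model; the learning rate and signals are arbitrary): the belief issues a prediction, the sensory level returns an error, and the belief moves to reduce that error.

```python
def predictive_step(belief, observation, lr=0.2):
    """One update of a minimal predictive-processing loop: the prediction
    descends, the prediction error ascends, and the belief shifts to
    reduce the error (perceptual inference)."""
    prediction = belief                # descending prediction
    error = observation - prediction   # ascending prediction error
    return belief + lr * error, error

belief = 0.0
for obs in [1.0, 1.0, 1.0, 1.0, 1.0]:  # a stable sensory signal
    belief, err = predictive_step(belief, obs)
    print(f"belief={belief:.3f}  error={err:.3f}")  # the error shrinks over time
```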
Article
Full-text available
Human regulatory systems largely evolved under conditions of food and information scarcity but are now being forced to deal with abundance. The impact of abundance and the inability of human regulatory systems to adapt to it have fed a surge in dual health challenges: (1) a rise in obesity related to food abundance and (2) a rise in stress and anxiety related to information abundance. No single framework has been developed to describe why and how the transition from scarcity to abundance has been so challenging. Here, we provide a speculative model based on predictive processing. We suggest that whereas scarcity (above destructive lower bounds like famine or information voids) preserves the fidelity of the relationship between prediction errors and predictions, abundance distorts this relationship. Furthermore, prediction error minimization is enhanced under scarcity (as the number of competing states in the niche is restricted), whereas the opposite is true under abundance. We also discuss how abundance warps the fundamental drive for seeking novelty by fueling the brain's exploration (as opposed to exploitation) mode. Ameliorative strategies for regulating food and information abundance may largely depend on simulating scarcity, that environmental condition to which human regulatory systems have adapted over millennia.
... Thus, when one's goal is to understand someone else by simple observation, searching for the most parsimonious deconstruction that explains this person's behavior or a situation is continuously ongoing and part of regulating behavior to establish common ground (Vasil et al. 2019). It resembles the engineering process that Braitenberg described as "synthetic psychology" (Braitenberg 1984): a behavior is most easily understood by reconstructing it from the simplest elements rather than by complicated speculations. In a sense, understanding is reverse-engineering of predictive processing. ...
Article
Full-text available
Within the concept of the extended mind, the active modification of external objects, externalizations, is seen as an auxiliary means to adapt to the environment. Toolmaking and use are advanced stages of externalizations that evolve. All past or present tools can, theoretically, be precisely assigned a location in an evolutionary tree with predecessors and progeny. Tools are reliably replicated, modified, and selected by their ability to facilitate human needs. Tool evolution, therefore, fulfills Darwinian criteria where the material tool is the phenotype and the instruction to build it is the code. The ostensive triangle consisting of a pointing individual, an observing individual, and a pointed-at object or tool is the germ cell of social transmission of instructions. Tool-building instructions ultimately can be reduced to distinct sequences of motor acts that can be recombined and are socially transmitted. When executed, they replicate tools for the reward of convenience or improved fitness. Tools elicit affordances relating to their use that synchronize different individuals’ perceptions, result in psychological “understanding,” and thereby modify social networks. Massive tool fabrication as present today in the “tool-sphere” has, therefore, accelerated prosociality and over time led to the acquisition of an individual’s third person perspective. The entangled biological evolution accelerated the ongoing cumulative cultural evolution by selecting traits facilitating social transmission. In this context, tool evolution and the corresponding acquired individual instructional content is a precondition to the emergence of higher cognition and “consciousness.” A neuroscience investigating externalizations as the starting point of this process is urgently needed.
... This explanatory and predictive style, which is called here 'folk cognitivist', involves the functional decomposition of the robotic system into modules processing representations. This claim is supported with reference to explanations of robotic behaviors acquired in the framework of a Braitenberg-style [16] robo-ethological project carried out with children. It is also claimed that the folk cognitivist stance cannot be simply equated with the design stance as defined by Dennett. ...
Chapter
The panel investigates the attribution of mental states and cognition to robots from a philosophical perspective, taking into account epistemological, ethical and technological (design) dimensions. These interconnected dimensions are explored through four talks. The first talk lays the groundwork by analyzing the different styles people may adopt to model the mind of robots. On these grounds, the second talk focuses on the role that emotion attribution to robots has in shaping our interactions with social robots. The third talk deals with robots’ decision-making capabilities in the context of social assistive robotics, with an eye to ethical implications. The fourth talk closes the panel, investigating how an enactive conception of intentionality impacts both our understanding of human-robot interaction and the design of robotic interfaces and architectures.
... In the full spirit of the metasensor, the proof-of-concept application presents a robotic system, initially designed to perform a given task, whose behavior is to be modified by the presence of the metasensor. Thus, we consider a Braitenberg-like agent as the robot to be controlled, which performs a task of avoiding sources of light (i.e., the "fear" behavior) [51]. At this point, a metasensor is added to the robot, which will be responsible for imparting a different behavior to the robot than the original one. ...
Article
Full-text available
Sensors play a fundamental role in achieving the complex behaviors typically found in biological organisms. However, their potential role in the design of artificial agents is often overlooked. This often results in the design of robots that are poorly adapted to the environment, compared to their biological counterparts. This paper proposes a formalization of a novel architectural component, called a metasensor, which enables a process of sensor evolution reminiscent of what occurs in living organisms. The metasensor layer searches for the optimal interpretation of its input signals and then feeds them to the robotic agent to accomplish the assigned task. Also, the metasensor enables a robot to adapt to new tasks and dynamic, unknown environments without requiring the redesign of its hardware and software. To validate this concept, a proof of concept is presented where the metasensor changes the robot’s behavior from a light avoidance task to an area avoidance task. This is achieved through two different implementations: one hand-coded and the other based on a neural network substrate, in which the network weights are evolved using an evolutionary algorithm. The results demonstrate the potential of the metasensor to modify the behavior of a robot through sensor evolution. These promising results pave the way for novel applications of the metasensor in real-world robotic scenarios, including those requiring online adaptation.
... The aim of these experiments is to gain a better understanding of the cognitive processes involved in interactions with so-called social robots. Extending work carried out in the fields of experimental and synthetic psychology (Heider & Simmel, 1944; Braitenberg, 1986), social anthropology (Grimaud & Paré, 2011; Becker, 2011; Vidal, 2012), and behavioral objects design (Bianchini et al., 2015), the group is adopting an approach centered not on resemblance to the human body, but on a robotic device whose behavior is a key factor in creating and maintaining a framework for social-like interaction. ...
Article
How do we make sense of a robot's behavior? In this article, we address what has often been presented as a human natural tendency to become animist when facing a social-like machine. Leaning on an interdisciplinary experiment, we will focus on the terms used by a group of human participants to describe and qualify the behavior of a robotic lamp. By studying the semantic spaces occupied by the words used to describe the movements of the machine, we will see that the meaning given to its activity is based on various known elements which also depend directly on the very experience of the participants. This experiment will help us to address the cognition involved during human-robot interactions.
... The schematics of these robots and automated machines were often based on the intuition of so-called wired intelligence. According to this design principle, different behaviors can be obtained from the machines by directly connecting actuators and sensors, with no need for dedicated controllers (23). In our case study, we design our systems to follow similar design rules and constraints to avoid the use of external electronic controllers in pneumatic circuits. ...
Article
Full-text available
Decision-making based on environmental cues is a crucial feature of autonomous systems. Embodying this feature in soft robots poses nontrivial challenges on both hardware and software that can undermine the simplicity and autonomy of such devices. Existing pneumatic electronics-free soft robots have so far mostly been approached by using system fluidic circuit architectures analogous to digital electronics. Instead, here we design dedicated pneumatic coding blocks equivalent to If, If...break, and For software control statements, which are based on the analog nature of nonlinear mechanical components. We demonstrate that we can combine these coding blocks into programs to implement sequences and to control an electronics-free autonomous soft gripper that switches between behaviors based on interactions with the environment. As such, our strategy provides an alternative approach to designing complex behavior in soft robotics that is more reminiscent of how functionalities are also encoded in the body of living systems.
... Having said that, while scrolling down a list, pushing a lever or kicking a ball are only very loosely associated with the 'feel' of intentions through resistance, adding a little complexity to the interaction quickly does wonders. Think of how a 'Braitenberg vehicle' directly connects sensors to motor actions, and yet achieves surprisingly complex behavior [14] that seems purposeful and 'alive'. Think of how little is required for an animal or even a plant to have 'goals'. ...
Preprint
Just as AI has moved away from classical AI, human-computer interaction (HCI) must move away from what I call 'good old fashioned HCI' to 'new HCI' - it must become a part of cognitive systems research where HCI is one case of the interaction of intelligent agents (we now know that interaction is essential for intelligent agents anyway). For such interaction, we cannot just 'analyze the data', but we must assume intentions in the other, and I suggest these are largely recognized through resistance to carrying out one's own intentions. This does not require fully cognitive agents but can start at a very basic level. New HCI integrates into cognitive systems research and designs intentional systems that provide resistance to the human agent.
... The simple robots they built had two wheels, two motors, and two light sensors [102,103]. This type of robot is well known from Braitenberg's famous book Vehicles [104], which argued that connecting sensor inputs to motor outputs in a particular way results in simple light-following behavior. For example, when the right wheel is driven proportionally to how much light the left sensor detects and the left wheel is similarly driven by the right light sensor, the robot will move towards the light. ...
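That crossed wiring takes only a few lines to state as code (a minimal sketch with arbitrary units; the gain is an assumption): each wheel is driven by the opposite light sensor, so the vehicle turns toward the brighter side.

```python
def crossed_wiring(left_light, right_light, gain=1.0):
    """Braitenberg vehicle 2b-style connections: the left sensor drives the
    right wheel and vice versa, producing light-approaching behavior."""
    left_wheel = gain * right_light
    right_wheel = gain * left_light
    return left_wheel, right_wheel

print(crossed_wiring(left_light=1.0, right_light=0.2))
# -> (0.2, 1.0): the faster right wheel turns the vehicle toward the light on its left
```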
Preprint
Full-text available
Biological evolution provides a creative fount of complex and subtle adaptations, often surprising the scientists who discover them. However, because evolution is an algorithmic process that transcends the substrate in which it occurs, evolution's creativity is not limited to nature. Indeed, many researchers in the field of digital evolution have observed their evolving algorithms and organisms subverting their intentions, exposing unrecognized bugs in their code, producing unexpected adaptations, or exhibiting outcomes uncannily convergent with ones in nature. Such stories routinely reveal creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.
... A well-known example of such an idealized trajectory for nervous systems is provided by Braitenberg (Braitenberg, 1984). He formulated a sequence of configurations starting with a single neural connection between a sensor and an effector to which more connections could be added, eventually leading to increasingly complex neural circuits. ...
Preprint
To understand how neurons and nervous systems first evolved, we need an account of the origins of neural elongations: Why did neural elongations (axons and dendrites) first originate, such that they could become the central component of both neurons and nervous systems? Two contrasting conceptual accounts provide different answers to this question. Braitenberg's vehicles provide the iconic illustration of the dominant input-output (IO) view. Here the basic role of neural elongations is to connect sensors to effectors, both situated at different positions within the body. For this function, neural elongations are thought of as comparatively long and specific connections, which require an articulated body involving substantial developmental processes to build. Internal coordination (IC) models stress a different function for early nervous systems. Here the coordination of activity across extended parts of a multicellular body is held central, in particular for the contractions of (muscle) tissue. An IC perspective allows the hypothesis that the earliest proto-neural elongations could have been functional even when they were initially simple short and random connections, as long as they enhanced the patterning of contractile activity across a multicellular surface. The present computational study provides a proof of concept that such short and random neural elongations can play this role. While an excitable epithelium can generate basic forms of patterning for small body-configurations, adding elongations allows such patterning to scale up to larger bodies. This result supports a new, more gradual evolutionary route towards the origins of the very first full neurons and nervous systems.
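The proof of concept described here can be miniaturized into a toy model (my construction, not the paper's simulation; cell counts and reach are arbitrary): excitable cells on a ring pass activation to their immediate neighbours, and adding a handful of short, random "elongations" lets an excitation wave cover the same body in fewer steps.

```python
import random

def spread_steps(n_cells, n_elongations, max_reach=10, seed=0):
    """Steps for an excitation wave, started at cell 0, to cover a ring of
    excitable cells, optionally with short random 'proto-neural elongations'."""
    rng = random.Random(seed)
    edges = {i: {(i - 1) % n_cells, (i + 1) % n_cells} for i in range(n_cells)}
    for _ in range(n_elongations):
        a = rng.randrange(n_cells)
        b = (a + rng.randint(2, max_reach)) % n_cells   # short, random reach
        edges[a].add(b); edges[b].add(a)
    excited, steps = {0}, 0
    while len(excited) < n_cells:                       # propagate one step at a time
        excited |= {j for i in excited for j in edges[i]}
        steps += 1
    return steps

print(spread_steps(200, n_elongations=0))    # plain epithelium: ~n/2 steps
print(spread_steps(200, n_elongations=30))   # with elongations: fewer steps
```

This mirrors the internal-coordination point: even short, unspecific connections speed up activity patterning across a larger body.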
... The trajectories are provided to Turing Learning using a reactive control architecture (Brooks, 1991). Evidence indicates that reactive behavioral rules are sufficient to produce a range of complex collective behaviors in both groups of natural and artificial agents (Braitenberg, 1984;Arkin, 1998;Camazine et al., 2003). Note that although reactive architectures are conceptually simple, learning their parameters is not trivial if the agent's inputs are not available, as is the case in our problem setup. ...
Preprint
We propose Turing Learning, a novel system identification method for inferring the behavior of natural or artificial systems. Turing Learning simultaneously optimizes two populations of computer programs, one representing models of the behavior of the system under investigation, and the other representing classifiers. By observing the behavior of the system as well as the behaviors produced by the models, two sets of data samples are obtained. The classifiers are rewarded for discriminating between these two sets, that is, for correctly categorizing data samples as either genuine or counterfeit. Conversely, the models are rewarded for 'tricking' the classifiers into categorizing their data samples as genuine. Unlike other methods for system identification, Turing Learning does not require predefined metrics to quantify the difference between the system and its models. We present two case studies with swarms of simulated robots and prove that the underlying behaviors cannot be inferred by a metric-based system identification method. By contrast, Turing Learning infers the behaviors with high accuracy. It also produces a useful by-product - the classifiers - that can be used to detect abnormal behavior in the swarm. Moreover, we show that Turing Learning also successfully infers the behavior of physical robot swarms. The results show that collective behaviors can be directly inferred from motion trajectories of individuals in the swarm, which may have significant implications for the study of animal collectives. Furthermore, Turing Learning could prove useful whenever a behavior is not easily characterizable using metrics, making it suitable for a wide range of applications.
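The two-population idea can be sketched on a deliberately trivial system (everything here is hypothetical: a scalar "behavior" parameter stands in for the robot controllers, and interval classifiers stand in for the evolved classifiers): models are rewarded for passing as genuine, classifiers for separating genuine samples from counterfeits, and no predefined similarity metric is ever used.

```python
import random

rng = random.Random(1)
TRUE_MU = 3.0

def system_sample():
    """The system under investigation; its parameter is unknown to the models."""
    return rng.gauss(TRUE_MU, 1.0)

def model_sample(mu):
    """A model's attempt to reproduce the system's behavior."""
    return rng.gauss(mu, 1.0)

def judge(clf, x):
    """A classifier votes 'genuine' if the sample falls inside its interval."""
    center, width = clf
    return abs(x - center) < width

models = [rng.uniform(-10, 10) for _ in range(20)]              # candidate parameters
classifiers = [(rng.uniform(-10, 10), 2.0) for _ in range(20)]  # (center, width)

for gen in range(100):
    genuine = [system_sample() for _ in range(20)]

    def clf_fitness(clf):   # reward discriminating genuine from counterfeit
        hits = sum(judge(clf, x) for x in genuine)
        rejections = sum(not judge(clf, model_sample(m)) for m in models)
        return hits + rejections

    def model_fitness(mu):  # reward 'tricking' classifiers into saying genuine
        return sum(judge(clf, model_sample(mu)) for clf in classifiers)

    classifiers = sorted(classifiers, key=clf_fitness, reverse=True)[:10]
    classifiers += [(c + rng.gauss(0, 0.5), w) for c, w in classifiers]
    models = sorted(models, key=model_fitness, reverse=True)[:10]
    models += [m + rng.gauss(0, 0.5) for m in models]

print(f"best surviving model parameter: {models[0]:.2f} (true value: {TRUE_MU})")
```

In a typical run the surviving model parameters cluster near the true value, recovered without ever comparing model output to the system under a metric.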
... We point out that avoiding unwarranted stops is an equally important performance criterion, which we call 'mobility'. • With the help of agent-based modeling, we show that a population of Braitenberg-vehicle-like agents (Braitenberg, 1986) using GRM as their sole collisionavoidance mechanism can be both safe from collisions and mobile when compared to looming-based agents. ...
Preprint
Brains and sensory systems evolved to guide motion. Central to this task is controlling the approach to stationary obstacles and detecting moving organisms. Looming has been proposed as the main monocular visual cue for detecting the approach of other animals and avoiding collisions with stationary obstacles. Elegant neural mechanisms for looming detection have been found in the brain of insects and vertebrates. However, looming has not been analyzed in the context of collisions between two moving animals. We propose an alternative strategy, Generalized Regressive Motion (GRM), which is consistent with recently observed behavior in fruit flies. Geometric analysis proves that GRM is a reliable cue to collision among conspecifics, whereas agent-based modeling suggests that GRM is a better cue than looming as a means to detect approach, prevent collisions and maintain mobility.
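For reference, the looming cue that GRM is contrasted with has a compact standard formulation (this sketch implements a textbook looming detector, not the paper's GRM strategy; the threshold is arbitrary): an object of radius R at distance d subtends a visual angle theta = 2*arctan(R/d), and the relative expansion rate theta_dot/theta grows as approach becomes imminent.

```python
import math

def angular_size(radius, distance):
    """Visual angle subtended by an object of a given radius at a distance."""
    return 2.0 * math.atan(radius / distance)

def looming_alarm(radius, distance, speed, dt=0.01, threshold=1.0):
    """Trigger when the relative expansion rate (theta_dot / theta) of an
    approaching object exceeds a threshold (a standard looming cue)."""
    theta_now = angular_size(radius, distance)
    theta_next = angular_size(radius, distance - speed * dt)
    theta_dot = (theta_next - theta_now) / dt
    return (theta_dot / theta_now) > threshold

# An obstacle 0.5 m wide approaching at 2 m/s: no alarm far away,
# alarm once the approach becomes imminent.
for d in [5.0, 2.0, 1.0, 0.5, 0.25]:
    print(d, looming_alarm(radius=0.25, distance=d, speed=2.0))
```

The paper's point is that this cue, while elegant for stationary obstacles, is not analyzed for collisions between two moving animals, which motivates GRM.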
... They have flipped switches in the controller of a feed-forward device. Rewiring can induce striking changes in behavior in simple simulated organisms, as elegantly demonstrated by Braitenberg (1986). These effects may not be limited to insects and rodents: T. gondii infects human brains (Parlog et al., 2015), and is associated with behavioral changes and psychopathologies (Tyebji et al., 2019). ...
Article
Full-text available
Agency is action aimed at goals selected by an agent. A deterministic world view leaves scant room for agency. To reconcile the arguments, we represent action as nested control systems, ranging from clearly deterministic to clearly volitional. Negative feedback minimizes deviations from setpoints (goals). Goals are determined by higher modules in a hierarchy of systems, ranging from gamma-efferent spindles through reflexes to operant responses; these last, and the larger system that contains them, called the Self, comprise volitional agents. When operants become habitual they descend to closed teleonomic systems: automaticity. Changes in emotional states, and unpredicted changes in the context, raise them back to full volitional systems. At the highest level is the Self: the brain's model of the agent. When aroused out of open teleonomic functioning, it must reconsider means and ends. It does so by simulating action plans, using the same neural systems it uses to effect them. The simulated stimuli and responses are conscious, approximating their perceptions as experienced in real time; this verisimilitude gives them their hedonic value. Positive feedback plays a key role in these complex adaptive systems, as it focuses and holds attention on the most salient percepts and goals, permitting the self-organization of action plans. The Self is not a separate entity, but a colloquy of command modules wearing the avatar of the agent. This system is put into correspondence with Grossberg's Adaptive Resonance Theory. Free will and determinism emerge not as binary opposites, but as the modulating inputs to a spectrum of systems.
... A system need not be particularly intelligent to be autonomous. For example, the work of Braitenberg [8] showed how seemingly complex behavior can emerge from the interaction of relatively simple systems and their environment. Rather, it is a system's ability to make its own decisions in a complex environment that is the salient aspect of autonomy. ...
Preprint
Full-text available
Autonomous systems use independent decision-making with only limited human intervention to accomplish goals in complex and unpredictable environments. As the autonomy technologies that underpin them continue to advance, these systems will find their way into an increasing number of applications in an ever wider range of settings. If we are to deploy them to perform safety-critical or mission-critical roles, it is imperative that we have justified confidence in their safe and correct operation. Verification is the process by which such confidence is established. However, autonomous systems pose challenges to existing verification practices. This paper highlights viewpoints of the Roadmap Working Group of the IEEE Robotics and Automation Society Technical Committee for Verification of Autonomous Systems, identifying these grand challenges, and providing a vision for future research efforts that will be needed to address them.
... It can either be studied by attempting to reverse engineer its functionality using (neurobehavioural) observational data in behaving biological agents or by designing and simulating adaptive artificial agents in complex environments to capture mechanisms of natural intelligence [2]. In this work, we are motivated by Braitenberg's law of uphill analysis and downhill invention which states that it is easier to understand complex systems by reconstructing them from basic components rather than by estimating them from observations [3]. Thus, we seek to explore the latter option of modelling traits of natural intelligence from basic building blocks, as an approach to foster their understanding. ...
Article
Full-text available
Foraging for resources in an environment is a fundamental activity that must be addressed by any biological agent. Modelling this phenomenon in simulations can enhance our understanding of the characteristics of natural intelligence. In this work, we present a novel approach to model foraging in-silico using a continuous coupled dynamical system. The dynamical system is composed of three differential equations, representing the position of the agent, the agent’s control policy, and the environmental resource dynamics. Crucially, the control policy is implemented as a parameterized differential equation which allows the control policy to adapt in order to solve the foraging task. Using this setup, we show that when these dynamics are coupled and the controller parameters are optimized to maximize the rate of reward collected, adaptive foraging emerges in the agent. We further show that the internal dynamics of the controller, as a surrogate brain model, closely resemble the dynamics of the evidence accumulation mechanism, which may be used by certain neurons of the dorsal anterior cingulate cortex region in non-human primates, for deciding when to migrate from one patch to another. We show that by modulating the resource growth rates of the environment, the emergent behaviour of the artificial agent agrees with the predictions of the optimal foraging theory. Finally, we demonstrate how the framework can be extended to stochastic and multi-agent settings.
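The structure of such a coupled system can be conveyed with a toy Euler integration (the specific right-hand sides here are invented for illustration; the paper's equations differ): the agent's position follows a one-parameter control policy, and the resource regrows logistically while being consumed when the agent is near the patch. Optimizing the policy parameter w for collected reward would correspond to the adaptation step described in the abstract.

```python
import math

def simulate(w=1.0, g=0.5, c=2.0, dt=0.01, steps=2000):
    """Euler integration of a toy coupled system: agent position x, a
    one-parameter control policy u = -w * x, and a logistic resource r
    consumed when the agent sits near the patch at x = 0."""
    x, r, collected = 3.0, 1.0, 0.0
    for _ in range(steps):
        u = -w * x                              # control policy output
        proximity = math.exp(-x * x)            # 1 at the patch, ~0 far away
        intake = c * r * proximity              # consumption rate
        x += dt * u                             # position dynamics
        r += dt * (g * r * (1.0 - r) - intake)  # resource dynamics
        r = max(r, 0.0)
        collected += dt * intake                # accumulated reward
    return collected

# A stronger policy gain reaches the patch sooner and collects more early reward.
print(simulate(w=0.5), simulate(w=2.0))
```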
... the actuation idiosyncrasies $\theta_i^a \in \mathbb{R}^p$, and the unmodeled disturbances $w_i \in \mathbb{R}^d$. Depending on the physical sensor(s) used, the ultimate goal is to connect these outputs directly to the drive actuators when ready to scale up, yielding essentially a swarm of Braitenberg vehicles [45]. As this paper is focused on the exploration and discovery of useful hard-wireable reactive controllers, our robots do use a CPU and software to parameterize the map from sensor readings to actuators. ...
Preprint
Despite significant research, robotic swarms have yet to be useful in solving real-world problems, largely due to the difficulty of creating and controlling swarming behaviors in multi-agent systems. Traditional top-down approaches in which a desired emergent behavior is produced often require complex, resource-heavy robots, limiting their practicality. This paper introduces a bottom-up approach by employing an Embodied Agent-Based Modeling and Simulation approach, emphasizing the use of simple robots and identifying conditions that naturally lead to self-organized collective behaviors. Using the Reality-to-Simulation-to-Reality for Swarms (RSRS) process, we tightly integrate real-world experiments with simulations to reproduce known swarm behaviors as well as discover a novel emergent behavior, without aiming to eliminate or even reduce the sim2real gap. This paper presents the development of an Agent-Based Embodiment and Emulation process that balances the importance of running physical swarming experiments against the prohibitively time-consuming process of even setting up and running a single experiment with 20+ robots, by leveraging low-fidelity lightweight simulations to enable hypothesis formation to guide physical experiments. We demonstrate the usefulness of our methods by emulating two known behaviors from the literature and show a third behavior 'discovered' by accident.
... These NCAs also expand how we apply and study these complex dynamical systems. Previously, the focus was on the emergence of interesting patterns from simple rules (Gardner, 1970), akin to the complex behavior arising from simple circuits (Braitenberg, 1986): going from the rules and start conditions to the emergent behavior. Because their rule-implementing network can be trained by deep learning, NCAs now allow us to go in the other direction, from the final pattern to the underlying rules. ...
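A bare-bones NCA step looks like this (a sketch: the perception and per-cell network are minimal stand-ins, and the weights are random rather than trained): each cell updates its state from its own channels plus a neighbourhood Laplacian through a small shared network, and training those shared weights against a target pattern is what inverts the rules-to-pattern direction.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C, HID = 16, 16, 4, 8
w1 = rng.normal(0, 0.1, (2 * C, HID))  # the trainable weights: in a real NCA
w2 = rng.normal(0, 0.1, (HID, C))      # these are optimized toward a target pattern

def laplacian(state):
    """3x3 Laplacian over each channel with wrap-around boundaries."""
    return (np.roll(state, 1, 0) + np.roll(state, -1, 0)
            + np.roll(state, 1, 1) + np.roll(state, -1, 1) - 4 * state)

def nca_step(state):
    """One NCA update: perceive (own state + neighbourhood Laplacian),
    pass through a tiny per-cell MLP, and apply a residual update."""
    perception = np.concatenate([state, laplacian(state)], axis=-1)
    hidden = np.maximum(perception @ w1, 0.0)  # ReLU
    return state + hidden @ w2                 # residual rule application

state = np.zeros((H, W, C))
state[H // 2, W // 2] = 1.0                    # a single seed cell
for _ in range(32):
    state = nca_step(state)
print("pattern spread to", int((np.abs(state).sum(-1) > 1e-6).sum()), "cells")
```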
Conference Paper
Full-text available
... The replicants in the Blade Runner dystopia are bioengineered, organic entities, not the electromechanical robots that exist today. Nevertheless, we consider that an embodied (robotic) computational modeling approach, or what we might call 'synthetic psychology' [6,7], is a promising route to understanding human memory and its relationship to sense of self [8][9][10]. This is the topic we will explore in this article. ...
Article
Full-text available
Episodic memories are experienced as belonging to a self that persists in time. We review evidence concerning the nature of human episodic memory and of the sense of self and how these emerge during development, proposing that the younger child experiences a persistent self that supports a subjective experience of remembering. We then explore recent research in cognitive architectures for robotics that has investigated the possibility of forms of synthetic episodic and autobiographical memory. We show that recent advances in generative modeling can support an understanding of the emergence of self and of episodic memory, and that cognitive architectures which include a language capacity are showing progress towards the construction of a narrative self with autobiographical memory capabilities for robots. We conclude by considering the prospects for a more complete model of mental time travel in robotics and the implications of this modeling work for understanding human episodic memory and the self in time. This article is part of the theme issue ‘Elements of episodic memory: lessons from 40 years of research’.
... Suppliers of educational materials offer various floor robots which, like Braitenberg's (1984) vehicles and the Jeulin turtle, use two drive wheels to generate movement. Of the eleven main suppliers of educational materials for French schools, four agreed to tell us which floor robots sell the most. ...
Chapter
This book presents a collection of research on the conditions for teaching and learning computer science in primary school. Bringing together researchers in education, computer science, psychology, and linguistics, it examines the specific challenges of teaching computer science at school through a study of public policies, teaching practices, and the perceptions of pupils and teachers. It also offers pedagogical resources and lesson-design tools to support the teaching of computational thinking, aiming to combine scientific rigor with accessible dissemination of knowledge. The book "Enseigner, apprendre, former à l'informatique à l'école : regards croisés" is aimed at a wide audience, including researchers, teacher trainers, and teachers interested in computer science education.
... Prior works have demonstrated the concept of morphological computation in physical bodies, designing or evolving a body plan which executes a given behavior with as little neural or control intelligence as possible [1,9,11,12,16,18,32], but these body plans tend to focus on finding an effective morphology for a single desired behavior. Conversely, very simple adaptive behaviors in neural circuits demonstrate minimally cognitive systems that adapt the behavior of a fixed body plan to environmental stimuli via extraordinary simple neural circuitry [2,7]. Bridging the gap between the two, prior work on brain-body cooptimization attempt to simultaneously evolve simple controllers and evolving robot bodies [17,5,19,23,36] or develop approaches to rapidly adapt a controller to an ever-evolving body plan [10,28,29]. ...
Preprint
Full-text available
It is prevalent in contemporary AI and robotics to separately postulate a brain modeled by neural networks and employ it to learn intelligent and adaptive behavior. While this method has worked very well for many types of tasks, it isn't the only type of intelligence that exists in nature. In this work, we study the ways in which intelligent behavior can be created without a separate and explicit brain for robot control, but rather solely as a result of the computation occurring within the physical body of a robot. Specifically, we show that adaptive and complex behavior can be created in voxel-based virtual soft robots by using simple reactive materials that actively change the shape of the robot, and thus its behavior, under different environmental cues. We demonstrate a proof of concept for the idea of closed-loop morphological computation, and show that in our implementation, it enables behavior mimicking logic gates, enabling us to demonstrate how such behaviors may be combined to build up more complex collective behaviors.
... The crucial role of embodiment, including its ecological aspects, for developmental robotics is apparently challenged by "Vehicles" (Braitenberg, 1986) and the notion of "intelligence without representation" (Brooks, 1991): it is suggested that purposive behavior does not necessarily have to rely on accurate models of the environment, but rather might be the result of the close interaction of a simple control architecture with a complex world. In other words, according to such view there is no need to build enduring, full-scale internal models of the world, because the environment can be probed and re-probed as needed, thus closing the loop in real-time. ...
Article
Full-text available
The trend in industrial/service robotics is to develop robots that can cooperate with people, interacting with them in an autonomous, safe and purposive way. These are the fundamental elements characterizing the fourth and the fifth industrial revolutions (4IR, 5IR): the crucial innovation is the adoption of intelligent technologies that can allow the development of cyber-physical systems, similar if not superior to humans. The common wisdom is that intelligence might be provided by AI (Artificial Intelligence), a claim that is supported more by media coverage and commercial interests than by solid scientific evidence. AI is currently conceived in a quite broad sense, encompassing LLMs and a lot of other things, without any unifying principle, but self-motivating through its success in various areas. The current view of AI robotics mostly follows a purely disembodied approach that is consistent with the old-fashioned, Cartesian mind-body dualism, reflected in the software-hardware distinction inherent to the von Neumann computing architecture. The working hypothesis of this position paper is that the road to the next generation of autonomous robotic agents with cognitive capabilities requires a fully brain-inspired, embodied cognitive approach that avoids the trap of mind-body dualism and aims at the full integration of Bodyware and Cogniware. We name this approach Artificial Cognition (ACo) and ground it in Cognitive Neuroscience. It is specifically focused on proactive knowledge acquisition based on bidirectional human-robot interaction: the practical advantage is to enhance generalization and explainability. Moreover, we believe that a brain-inspired network of interactions is necessary for allowing humans to cooperate with artificial cognitive agents, building a growing level of personal trust and reciprocal accountability: this is clearly missing, although actively sought, in current AI. The ACo approach is a work in progress that can take advantage of a number of research threads, some of them antedating the early attempts to define AI concepts and methods. In the rest of the paper we will consider some of the building blocks that need to be re-visited in a unitary framework: the principles of developmental robotics, the methods of action representation with prospection capabilities, and the crucial role of social interaction.
... It includes the centerline C_AL, the AL band B_AL, and the return band B_return. The return band is inspired by the "vehicles" of Braitenberg [13]: if the agent is in the "return zone", it may still come back to the AL region under some conditions. ...
... By way of motivation, here are two whimsical (but useful) past examples. Braitenberg vehicles [12] are interesting thought experiments involving the behavior of one or more simple vehicles that contain nothing but motors and sensors (specifically, there is no processor that computes what the vehicle does). It is fascinating that complex behaviors (which could be described using terms such as 'love', 'fear', or 'aggression') result from the interactions between a vehicle and its environment, and with other vehicles. ...
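The sensor-motor wiring can be stated in a few lines of code. Below is a minimal Python sketch of Braitenberg's vehicle 2b ("aggression"), in which each light sensor drives the contralateral wheel directly; the sensor model, gains, and geometry are illustrative assumptions:

```python
import math

# Braitenberg vehicle 2b: crossed excitatory connections from light
# sensors to wheel motors, with no processor in between.

def sensor_reading(sensor_pos, light_pos):
    """Light intensity falls off with squared distance (assumed model)."""
    d2 = (sensor_pos[0] - light_pos[0]) ** 2 + (sensor_pos[1] - light_pos[1]) ** 2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, light, dt=0.1, gain=2.0, base=0.05, width=0.2):
    # Left and right sensors sit at the front corners of the vehicle.
    left_sensor = (x + math.cos(heading + 0.5) * width,
                   y + math.sin(heading + 0.5) * width)
    right_sensor = (x + math.cos(heading - 0.5) * width,
                    y + math.sin(heading - 0.5) * width)
    s_left = sensor_reading(left_sensor, light)
    s_right = sensor_reading(right_sensor, light)
    # Crossed wiring: left sensor -> right motor, right sensor -> left motor.
    v_left = base + gain * s_right
    v_right = base + gain * s_left
    v = 0.5 * (v_left + v_right)
    heading += (v_right - v_left) / width * dt  # differential-drive turning
    return x + v * math.cos(heading) * dt, y + v * math.sin(heading) * dt, heading

x, y, h = 0.0, 0.0, 0.0
for _ in range(200):
    x, y, h = step(x, y, h, light=(3.0, 2.0))
print(f"final position: ({x:.2f}, {y:.2f})")  # the vehicle heads for the light
```

Because the stronger (nearer) sensor speeds up the opposite wheel, the vehicle turns toward the light and accelerates as it approaches, exactly the behavior an observer is tempted to call 'aggression'.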
Chapter
Full-text available
The central point made in this paper is this: human-level grounded meaning in an agent can only result from directly experiencing the world, which in turn is only possible via embodiment (coupled with 'embrainment', a suitable brain architecture). Via embodiment, we humans are able to internalize our direct interactions with the world, in addition to being able to associate symbols with them; this allows us to communicate via symbols, thereby externalizing our individual representations for mutual, collective benefit. Lacking embodiment, in contrast, today's AI agents can operate only at a derivative, symbolic level, without being able to experientially understand, or relate to, the meaning behind the symbols they process. The only way to enable artificial agents to acquire 'first-hand' meaning for symbols would be to provide suitable embodiment for them. A case is also made for machines capable of exhibiting non-symbolic intelligence via analog embodiment, to complement symbol-originated intelligence.
... Manually programmed IBMs are used in simplistic animal models such as Braitenberg vehicles [10]. They are also ubiquitous in game AI, where models of animals and imaginary creatures are controlled by finite state machines, sometimes with added noise [9]. ...
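For illustration, a finite state machine of the kind described can be sketched in a few lines; the states, transition rules, and noise level below are assumptions chosen for the example, not taken from any particular game:

```python
import random

# A manually programmed animal model as a finite state machine with
# added noise, in the spirit of the game AI described above.

STATES = ("graze", "flee", "rest")

def next_state(state, predator_near, energy, noise=0.1):
    # Occasional random transitions make the behavior look less mechanical.
    if random.random() < noise:
        return random.choice(STATES)
    if predator_near:
        return "flee"
    if energy < 0.2:
        return "rest"
    return "graze"

state, energy = "graze", 1.0
for t in range(20):
    predator_near = random.random() < 0.15
    state = next_state(state, predator_near, energy)
    energy += {"graze": 0.05, "flee": -0.2, "rest": 0.1}[state]
    energy = max(0.0, min(1.0, energy))
    print(t, state, round(energy, 2))
```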
Chapter
Full-text available
We present an AI-based ecosystem simulator that uses three-dimensional models of the terrain and animal models controlled by deep reinforcement learning. The simulations take place in a game engine environment, which enables continuous visual observation of the ecosystem model. The terrain models are generated from geographic data with altitudes and land cover type. The animal models combine three-dimensional conformation models with animation schemes and decision-making mechanisms trained with deep reinforcement learning in increasingly complex environments (curriculum learning). We show how AI tools of this kind can be used for modeling the development of specific ecosystems with and without different forms of economic activities. In particular, we show how they might be used for modeling local biodiversity effects of land cover change, exploitation of natural resources, pollution, invasive species, and climate change.
Article
Full-text available
In the visual ‘teach-and-repeat’ task, a mobile robot is expected to perform path following based on visual memory acquired along a route that it has traversed. Following a visually familiar route is also a critical navigation skill for foraging insects, which they accomplish robustly despite tiny brains. Inspired by the mushroom body structure in the insect brain and its well-understood associative learning ability, we develop an embodied model that can accomplish visual teach-and-repeat efficiently. Critical to the performance is steering the robot body reflexively based on the relative familiarity of left and right visual fields, eliminating the need for stopping and scanning regularly for optimal directions. The model is robust against noise in visual processing and motor control and can produce performance comparable to pure pursuit or visual localisation methods that rely heavily on the estimation of positions. The model is tested on a real robot and also shown to be able to correct for significant intrinsic steering bias.
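The core reflex is simple enough to sketch. In the toy Python version below, "familiarity" is a nearest-snapshot cosine similarity standing in for the paper's mushroom-body network, and the views, memory, and gain are illustrative assumptions:

```python
import math

# Reflexive route following: steer toward whichever visual hemifield
# currently looks more familiar, with no position estimation at all.

def similarity(a, b):
    """Cosine similarity between two flattened view vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def familiarity(view, memory):
    """Toy familiarity: similarity to the closest snapshot stored
    during the teaching run (stand-in for the mushroom-body model)."""
    return max(similarity(view, snap) for snap in memory)

def steering(left_view, right_view, memory, gain=1.0):
    """Positive steers left, negative steers right (sign convention assumed)."""
    return gain * (familiarity(left_view, memory) - familiarity(right_view, memory))

# Usage: the left view matches the stored snapshot better, so the
# command is positive and the robot turns left, back onto the route.
memory = [[1.0, 0.2, 0.0]]
print(steering([0.9, 0.25, 0.1], [0.1, 0.8, 0.9], memory))
```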
Thesis
Artificial intelligence (AI) and machine learning (ML) have the potential to unlock a deeper understanding of the human experience in health settings. This research investigated the integration of AI and ML into mental health care, specifically focusing on their application within emergency departments (EDs). EDs serve as critical access points for mental health services, yet they face significant challenges, including overcrowding, long wait times, and a scarcity of specialised resources. The study explored how AI and ML could be harnessed to address these challenges, improve clinical decision making, and enhance patient outcomes. The thesis begins by framing the problem within the broader context of mental health care, highlighting the significance of the study. It underscores the importance of the Sustainable Development Goals, ethical AI use, and the inclusion of lived experience in shaping mental healthcare strategies. This foundational approach emphasises the need for an integrated model of care that balances technological innovation with ethical considerations and human-centred care. The research explored the role of technology in mental health, particularly its impact on individuals experiencing conditions such as suicidality. The dual nature of technology was examined, recognising its capacity to drive innovation and empowerment while also acknowledging the potential for technology to induce anxiety, fear, or paranoia among vulnerable populations. This exploration underscores the need for a careful and ethical approach to the integration of AI in mental health care, considering the broader implications for patients and clinicians alike. A comprehensive literature review examines the concept of trust in AI, particularly in the context of health care. The review challenges the assumption that AI systems are inherently trustworthy and explores the anthropomorphisation of AI, questioning whether these systems should be deemed deserving of trust. The study also highlights the rapid advancements in AI and ML, particularly through the development of large language models. These models have significantly expanded the capabilities of natural language processing and offer considerable potential for healthcare applications. However, the research identified ethical challenges associated with these technologies, including bias, privacy concerns, and the need for transparency in AI deployment. The thesis advocates the responsible development of AI systems that enhance rather than replace human clinical judgement. The methodological framework employed in this research was designed to ensure a thorough understanding of the needs and concerns of various stakeholders, including clinicians, patients, and interdisciplinary experts. The study utilised design thinking, coupled with a retrospective cohort analysis of sociodemographic factors and presentation features of individuals seeking mental health care in EDs. Ethical considerations were prioritised throughout the research, with a particular focus on cultural sensitivity and the inclusion of diverse perspectives. The findings of the research provide a detailed analysis of the complexities involved in mental health care delivery in EDs. This research underscores the essential role of EDs in providing frontline mental health services while also identifying significant challenges, such as the need for more specialised resources and the ethical integration of AI into clinical practice. 
The research demonstrates the potential of ML to improve service provision, particularly through predictive modelling and clinical decision support systems. However, it also highlights limitations related to data quality, trust, and ethical considerations, emphasising the importance of a cautious and responsible approach to AI integration. In conclusion, the thesis discusses the broader implications of the research findings for clinical practice and policy. It offers recommendations for future research, including the evaluation of safety planning interventions, the exploration of collaborative care models, and the utilisation of ML for enhanced predictive modelling in mental health settings. The research advocates a comprehensive and integrated approach to mental health care that leverages the potential of AI while maintaining a strong ethical framework. Overall, the research contributes to the growing body of knowledge on the role of AI and ML in mental health care. It highlights the need for trust, transparency, and ethical considerations in the development of AI-based tools, ensuring that these technologies complement human clinical judgement rather than replace it. The findings suggest that, with careful and ethical deployment, AI has the potential to significantly improve mental healthcare delivery, leading to more effective, equitable, and patient-centred services.
Conference Paper
Full-text available
Nolfi and other researchers applied the ideas of Langton and Holland in the evolutionary robotics approach, in which an initial population of robots with different genotypes is created at random. Within this evolutionary approach, we can additionally take adaptive behavior as a particular object of study and as an attempt to understand the phenomena of natural life by reproducing them in artificial systems. Conference: II ENCONTRO DA PÓS-GRADUAÇÃO EM FILOSOFIA, Faculdade de Filosofia e Ciências, Universidade Estadual Paulista, UNESP. At: Marília, SP - Brazil
Article
Full-text available
The capacity of animals to feel pain has been an object of philosophical reflection since antiquity, and it raises questions about what sentience is, where it sits in the ontological structure of the world, and what its significance is in moral discourse. The relevance of these questions, and the connections among them, has varied throughout the history of philosophy: from the end of the nineteenth century onward, and increasingly over recent decades, interest in the ontology of animal sentience has been displaced by concerns about the moral value of suffering relative to other markers of moral standing, such as rationality. This article offers a historical survey and a comparative analysis of philosophical discussions of animal sentience from antiquity to the present.
Article
Full-text available
The article addresses the relationship between the concepts of "artificial intelligence" (AI) and "artificial neural networks" (ANNs) in a forensic context. Over the past few years there has been growing scientific interest in applying these technologies to forensic examination, which makes it highly relevant to ask how these developments are currently affecting forensic practice and how they might influence it in the long term. Identifying their specific characteristics is expected to facilitate a more efficient integration of AI and ANNs into forensic activities at the methodological, legal, and organizational levels. To illustrate the general relationship between AI and neural networks, and to demonstrate how they differ, the author provides a brief historical overview of the development of AI concepts and a description of the operating principles of certain AI systems, specifically artificial neural networks. The author also proposes ways to integrate AI and neural networks into forensic activities at both theoretical and practical levels.
Conference Paper
Full-text available
Autonomous agents represent a special class of systems in the context of Artificial Intelligence (AI) and Artificial Life. As embodied AI, they are systems that have a certain understanding of their environment and adapt their behavior accordingly. They are used in a wide range of fields, from robotics to video games, but also in artistic practice. Craig W. Reynolds' work on steering behavior and flocking simulation provided an important framework for simulating the motion of autonomous agents. Originally intended for use in computer games, this framework has inspired adaptations and applications in a wide variety of domains, including spatial composition. The motion of autonomous agents in 3D space can be used to control spatial sound sources and other virtual objects to realize life-like behavior in improvisations or conducted performances. In this paper, we describe the implementation of the steering behaviors proposed by Reynolds as autonomous agents in the modular 3D engine IVES for the Max development environment. Furthermore, we demonstrate the potential to realize spatial audiovisual compositions and performances with the improvisational behavior of autonomous agents in combination with a 3D engine specialized for art production.
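The basic building block of Reynolds' framework is easy to illustrate. The Python sketch below implements his "seek" behavior, where the steering force is the difference between the desired velocity (toward the target at maximum speed) and the current velocity; the parameter values and vector helpers are illustrative:

```python
import math

# Reynolds' "seek" steering behavior: steer = desired velocity - current
# velocity, truncated to the agent's maximum steering force.

def seek(position, velocity, target, max_speed=2.0, max_force=0.1):
    dx, dy = target[0] - position[0], target[1] - position[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)
    desired = (dx / dist * max_speed, dy / dist * max_speed)
    steer = (desired[0] - velocity[0], desired[1] - velocity[1])
    mag = math.hypot(*steer)
    if mag > max_force:  # cap the turning/acceleration authority
        steer = (steer[0] / mag * max_force, steer[1] / mag * max_force)
    return steer

pos, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(100):
    fx, fy = seek(pos, vel, target=(10.0, 5.0))
    vel = (vel[0] + fx, vel[1] + fy)
    pos = (pos[0] + vel[0] * 0.1, pos[1] + vel[1] * 0.1)
print(f"agent position after 100 steps: ({pos[0]:.2f}, {pos[1]:.2f})")
```

Flocking adds further steering terms (separation, alignment, cohesion) that are summed with seek in the same way, which is what makes the framework attractive for driving spatial sound sources.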
Article
Merging the functional model of the psychic apparatus and the functional model of the nervous system into a single model (which I call the Ψ‑organ), and thus bridging the gap between these two worlds in a scientific way, has been a long-held wish. Today's methods from information engineering make it achievable. In the SiMA basic project, whose first concepts date back to 1998, this idea was consistently pursued until 2018. The results are presented in this article. First, the area of conflict is discussed; then the fundamental considerations and the scientific basis are described. This provides the basis for the development of the Ψ‑organ model, which is briefly outlined. Finally, the two main areas of application are addressed: first, the validation and further development of the theories of psychoanalysis by its experts through simulation experiments, and second, the implementation in the field of artificial intelligence.
Chapter
Based on the network theory of culture at the macro level outlined in Chap. 12, this chapter presents ideas for the development of AI applications that advance cultural exchanges, multicultural understanding, and interdisciplinary cultures.
Preprint
Full-text available
As the relevance of neuroscience in education grows, effective methods for teaching this complex subject in high school classrooms remain elusive. Integrating classroom experiments with brain-based robots offers a promising solution. This paper presents a structured curriculum designed around camera-equipped mobile robots that enable students to construct and explore artificial neural networks. Through this hands-on approach, students engage directly with core concepts in neuroscience, learning to model spiking neural networks, decision-making processes in the basal ganglia, and principles of learning and memory. The curriculum not only makes challenging neuroscience concepts accessible and engaging but also yields significant improvements in students’ understanding and self-efficacy. By detailing the curriculum’s development, implementation, and educational outcomes, this study outlines a scalable model for incorporating advanced scientific topics into secondary education, paving the way for a deeper student understanding of both theoretical neuroscience and its practical applications.
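As an illustration of the kind of model such a curriculum typically starts from, here is a minimal leaky integrate-and-fire neuron in Python; this is not the paper's classroom code, and all parameters below are illustrative assumptions:

```python
# Leaky integrate-and-fire (LIF) neuron, the standard entry point for
# modeling spiking neural networks. Parameters are illustrative.

def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Euler integration of dV/dt = (-(V - v_rest) + R*I) / tau."""
    v, spikes = v_rest, []
    for t, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:      # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset        # reset the membrane potential
    return spikes

# A constant suprathreshold drive produces a regular spike train.
print(simulate_lif([2.0] * 100))
```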
Article
Full-text available
Social learning is a collective approach to decentralised decision-making and comprises two processes: evidence updating and belief fusion. In this paper we propose a social learning model in which agents’ beliefs are represented by a set of possible states, and where the evidence collected can vary in its level of imprecision. We investigate this model using multi-agent and multi-robot simulations and demonstrate that it is robust to imprecise evidence. Our results also show that certain kinds of imprecise evidence can enhance the efficacy of the learning process in the presence of sensor errors.
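A minimal sketch of set-valued social learning, under stated assumptions: beliefs are sets of still-possible states, evidence updating intersects the belief with an (possibly imprecise) evidence set, and the fusion rule below (intersection when consistent, union otherwise) is one common operator for set-valued beliefs, assumed here rather than taken from the paper:

```python
# Set-valued beliefs: an agent's belief is the set of states it still
# considers possible.

def update(belief: set, evidence: set) -> set:
    """Evidence updating: keep the states consistent with the evidence."""
    restricted = belief & evidence
    return restricted if restricted else belief  # ignore contradictory evidence

def fuse(belief_a: set, belief_b: set) -> set:
    """Pairwise fusion: agree where possible, otherwise pool possibilities."""
    common = belief_a & belief_b
    return common if common else belief_a | belief_b

states = {1, 2, 3, 4}
a, b = set(states), set(states)
a = update(a, {1, 2})   # imprecise evidence: the true state is 1 or 2
b = update(b, {2, 3})   # imprecise evidence: the true state is 2 or 3
print(fuse(a, b))       # -> {2}: fusion narrows the collective belief
```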
Chapter
Large collectives of artificial agents are quickly becoming a reality at the micro-scale for healthcare and biological research, and at the macro-scale for personal care, transportation, and environmental monitoring. However, the design space of reactive collectives and the resulting emergent behaviors are not well understood, especially with respect to different sensing models. Our work presents a well-defined model and simulation for study of such collectives, extending the Braitenberg Vehicle model to multi-agent systems with on-board stimulus. We define omnidirectional and directional sensing and stimulus models, and examine the impact of the modelling choices. We characterize the resulting behaviors with respect to spatial and kinetic energy metrics over the collective, and identify several behaviors that are robust to changes in the sensor model and other parameters. Finally, we provide a demonstration of how this approach can be used for control of a swarm using a single controllable agent and global mode switching.
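The distinction between the two sensing models can be illustrated with a short sketch; both response functions below are assumptions chosen for clarity, not the paper's exact definitions:

```python
import math

# Omnidirectional sensing responds only to distance; directional sensing
# additionally weights the stimulus by its bearing off the sensor axis
# and is blind to stimuli behind the sensor.

def omnidirectional(distance: float) -> float:
    return 1.0 / (1.0 + distance ** 2)

def directional(distance: float, bearing: float) -> float:
    """`bearing` is the stimulus angle (radians) off the sensor axis."""
    if abs(bearing) > math.pi / 2:
        return 0.0                       # stimulus is behind the sensor
    return math.cos(bearing) / (1.0 + distance ** 2)

# Same stimulus at three bearings: the directional model breaks the
# rotational symmetry that the omnidirectional model preserves.
for bearing in (0.0, math.pi / 4, math.pi):
    print(bearing, omnidirectional(2.0), directional(2.0, bearing))
```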