Article

A Robust Layered Control System for a Mobile Robot

Authors:
Rodney A. Brooks

Abstract

A new architecture for controlling mobile robots is described. Layers of control system are built to let the robot operate at increasing levels of competence. Layers are made up of asynchronous modules that communicate over low-bandwidth channels. Each module is an instance of a fairly simple computational machine. Higher-level layers can subsume the roles of lower levels by suppressing their outputs. However, lower levels continue to function as higher levels are added. The result is a robust and flexible robot control system. The system has been used to control a mobile robot wandering around unconstrained laboratory areas and computer machine rooms. Eventually it is intended to control a robot that wanders the office areas of our laboratory, building maps of its surroundings using an onboard arm to perform simple tasks.
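To make the layering idea concrete, here is a minimal Python sketch of suppression-based arbitration between a low-level wander competence and a higher-level obstacle-avoidance competence. The behavior names, sensor fields, and command format are illustrative assumptions rather than details from the paper, and the synchronous loop stands in for the asynchronous modules it describes.

```python
# Minimal sketch of the subsumption idea: layered behaviors where a
# higher layer can suppress (override) the output of a lower layer.
# Layer names and the sensor dictionary are illustrative only.

class Behavior:
    """Stand-in for an asynchronous module: maps sensor readings to a command or None."""
    def act(self, sensors):
        raise NotImplementedError

class Wander(Behavior):          # level 0: basic competence
    def act(self, sensors):
        return {"forward": 0.2, "turn": 0.0}

class AvoidObstacles(Behavior):  # level 1: subsumes wandering when needed
    def act(self, sensors):
        if sensors["front_range"] < 0.5:        # obstacle close ahead
            return {"forward": 0.0, "turn": 0.8}
        return None                              # defer to lower layers

def arbitrate(layers, sensors):
    """The highest layer that produces an output suppresses those below it."""
    for behavior in reversed(layers):            # highest priority first
        command = behavior.act(sensors)
        if command is not None:
            return command
    return {"forward": 0.0, "turn": 0.0}

layers = [Wander(), AvoidObstacles()]            # index == competence level
print(arbitrate(layers, {"front_range": 0.3}))   # avoidance suppresses wandering
print(arbitrate(layers, {"front_range": 2.0}))   # wandering continues unimpeded
```

The property the sketch preserves is the one the abstract emphasizes: the lower level keeps producing output and takes over again whenever the higher level has nothing to say.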

... A complementary approach is to use evolutionary robotics techniques to evolve controllers (Doncieux et al., 2015;Kriesel et al., 2008), which often works because simple individual agents can produce complex swarm behaviors. Evolutionary robotics requires the designer to choose and aggregate from various controllers (e.g., state-machines (Ferrante et al., 2013;Brooks, 1986;Petrovic, 2008;Pintér-Bartha et al., 2012;König et al., 2009;Neupane et al., 2018), neural networks (Cliff et al., 1993;Lewis et al., 1992;Duarte et al., 2016;Trianni et al., 2003), behavior trees (Kucking et al., 2018;Kuckling et al., 2021)), evolutionary algorithms (e.g., genetic evolution (Kriesel et al., 2008;Brooks, 1986), grammatical evolution (Ferrante et al., 2013;Neupane & Goodrich, 2019a)), and fitness functions (Nelson et al., 2009). ...
... Since this paper addresses bio-inspired solutions, it emphasizes bottom-up approaches. For example, the subsumption architecture (Brooks, 1986) is a widely cited example for how complex behaviors can emerge by decomposing complex behaviors into layered sub-behaviors. Building on this decomposition philosophy, behavior fusion (Goodridge & Luo, 1994;Li & Feng, 1994) is a bottom-up approach that learns a weight or priority for each modular behavior. ...
Article
Full-text available
Grammatical evolution can be used to learn bio-inspired solutions to many distributed multiagent tasks, but the programs learned by the agents often need to be resilient to perturbations in the world. Biological inspiration from bacteria suggests that ongoing evolution can enable resilience, but traditional grammatical evolution algorithms learn too slowly to mimic rapid evolution because they utilize only vertical, parent-to-child genetic variation. The BeTr-GEESE grammatical evolution algorithm presented in this paper creates agents that use both vertical and lateral gene transfer to rapidly learn programs that perform one step in a multi-step problem even though the programs cannot perform all required subtasks. This paper shows that BeTr-GEESE can use online evolution to produce resilient collective behaviors on two goal-oriented spatial tasks, foraging and nest maintenance, in the presence of different types of perturbation. The paper then explores when and why BeTr-GEESE succeeds, emphasizing two potentially generalizable properties: modularity and locality. Modular programs enable real-time lateral transfer, leading to resilience. Locality means that the appropriate phenotypic behaviors are local to specific regions of the world (spatial locality) and that recently useful behaviors are likely to be useful again shortly (temporal locality). Finally, the paper modifies BeTr-GEESE to perform behavior fusion across multiple modular behaviors using activator and repressed conditions so that a fixed (non-evolving) population of heterogeneous agents is resilient to perturbations.
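As a rough illustration of the behavior-fusion idea mentioned in the citing context above (a learned weight or priority per modular behavior), the following sketch blends two hand-written behaviors with fixed weights. The behaviors, weights, and state layout are assumptions for illustration only, not the BeTr-GEESE mechanism itself.

```python
import numpy as np

# Sketch of behavior fusion: each modular behavior proposes a velocity
# command, and a weight per behavior (hand-set here, learned in practice)
# blends them into a single control input.

def go_to_goal(state):
    direction = state["goal"] - state["pos"]
    return direction / (np.linalg.norm(direction) + 1e-9)

def avoid_obstacle(state):
    away = state["pos"] - state["obstacle"]
    d = np.linalg.norm(away)
    return away / (d ** 2 + 1e-9)          # stronger repulsion when closer

def fuse(behaviors, weights, state):
    """Weighted sum of per-behavior command vectors, capped at unit speed."""
    total = sum(w * b(state) for b, w in zip(behaviors, weights))
    norm = np.linalg.norm(total)
    return total / norm if norm > 1.0 else total

state = {"pos": np.array([0.0, 0.0]),
         "goal": np.array([5.0, 0.0]),
         "obstacle": np.array([2.0, 0.5])}
print(fuse([go_to_goal, avoid_obstacle], [1.0, 0.8], state))
```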
... Unlike point-to-point planning, CPP aims to maximize coverage while minimizing parameters like time to completion. Early methods involved behaviour-based approaches with heuristic and random elements [5,6]. Modern CPP algorithms often use cellular decomposition to achieve complete coverage by dividing the target region into cells, ensuring coverage within each cell [5,6]. ...
... Early methods involved behaviour-based approaches with heuristic and random elements [5,6]. Modern CPP algorithms often use cellular decomposition to achieve complete coverage by dividing the target region into cells, ensuring coverage within each cell [5,6]. This approach is crucial in applications such as agriculture and cleaning robotics, optimizing routes for efficient coverage, resulting in time and resource savings, and enhancing the operational efficiency of autonomous systems. ...
Article
Full-text available
INTRODUCTION: In integrating Spiral Coverage into Cellular Decomposition, which combines structured grid-based techniques with flexible, quick spiral traversal, time efficiency is increased. OBJECTIVES: In the field of robotics and computational geometry, the study proposes a comparative exploration of two prominent path planning methodologies-Boustrophedon Cellular Decomposition and the innovative Spiral Coverage. Boustrophedon coverage has limitations in time efficiency due to its back-and-forth motion pattern, which can lead to lengthier coverage periods, especially in congested areas. Nevertheless, it is useful in some situations. It is critical to address these time-related issues to make Boustrophedon algorithms more useful in practical settings. METHODS: The research centres on achieving comprehensive cell coverage, addressing the complexities arising from confined spaces and intricate geometries. While conventional methods emphasise route optimization between points, the coverage path planning approach seeks optimal paths that maximize coverage and minimize associated costs. This study delves into the theory, practical implementation, and application of Spiral Coverage integrated with established cellular decomposition techniques. RESULTS: Through comparative analysis, it illustrates the advantages of spiral coverage over boustrophedon coverage in diverse robotics and computational applications. The research highlights Spiral Coverage's superiority in terms of path optimization, computational efficiency, and adaptability, proposing a novel perspective into cell decomposition. The methodology integrates the Spiral Coverage concept, transcending traditional techniques reliant on grids or Voronoi diagrams. Rigorous evaluation validates its potential to enhance path planning, exemplifying a substantial advancement in robotics and computational geometry. CONCLUSION: Our findings show that spiral coverage is on an average 45% more efficient than conventional Boustrophedon coverage. This paper set the basis for the future work on how different algorithms can traverse different shapes more efficiently.
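The two coverage patterns compared above can be sketched for a single rectangular, obstacle-free cell as follows. The grid dimensions are arbitrary, and a real planner would first perform the cellular decomposition that yields such cells.

```python
# Sketch contrasting boustrophedon and spiral coverage of one rectangular cell.

def boustrophedon(width, height):
    """Back-and-forth sweep: alternate left-to-right and right-to-left rows."""
    path = []
    for y in range(height):
        row = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        path.extend((x, y) for x in row)
    return path

def spiral(width, height):
    """Inward spiral: peel off the outer ring, then shrink toward the centre."""
    path, top, bottom, left, right = [], 0, height - 1, 0, width - 1
    while top <= bottom and left <= right:
        path.extend((x, top) for x in range(left, right + 1))
        path.extend((right, y) for y in range(top + 1, bottom + 1))
        if top < bottom:
            path.extend((x, bottom) for x in range(right - 1, left - 1, -1))
        if left < right:
            path.extend((left, y) for y in range(bottom - 1, top, -1))
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return path

assert len(boustrophedon(5, 4)) == len(spiral(5, 4)) == 20   # both visit every cell once
```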
... Furthermore, the level of thinking can be located on an axis with at least two levels. These include a shallow automatic level without careful thinking (fast process) and a deep deliberative level that requires time to carefully think (slow process) (Brooks, 1986;Evans, 2003;Kahneman, 2011). ...
... To examine the internal environment that stimulates intellectual curiosity, we manipulated the strategy of exploring the external environment in terms of different levels of thinking (Brooks, 1986;Evans, 2003;Kahneman, 2011). As explained in Section 2.1, human mental activities are traditionally divided ...
Article
Full-text available
Studies on reinforcement learning have developed the representation of curiosity, which is a type of intrinsic motivation that leads to high performance in a certain type of tasks. However, these studies have not thoroughly examined the internal cognitive mechanisms leading to this performance. In contrast to this previous framework, we propose a mechanism of intrinsic motivation focused on pattern discovery from the perspective of human cognition. This study deals with intellectual curiosity as a type of intrinsic motivation, which finds novel compressible patterns in the data. We represented the process of continuation and boredom of tasks driven by intellectual curiosity using “pattern matching,” “utility,” and “production compilation,” which are general functions of the adaptive control of thought-rational (ACT-R) architecture. We implemented three ACT-R models with different levels of thinking to navigate multiple mazes of different sizes in simulations, manipulating the intensity of intellectual curiosity. The results indicate that intellectual curiosity negatively affects task completion rates in models with lower levels of thinking, while positively impacting models with higher levels of thinking. In addition, comparisons with a model developed by a conventional framework of reinforcement learning (intrinsic curiosity module: ICM) indicate the advantage of representing the agent's intention toward a goal in the proposed mechanism. In summary, the reported models, developed using functions linked to a general cognitive architecture, can contribute to our understanding of intrinsic motivation within the broader context of human innovation driven by pattern discovery.
... Managing interoperability for both contemporary systems and older facilities that rely on data collection and transmission, including the Internet of Things (IoT) and Cyber-Physical Systems (CPS), is made easier using this method. The advent of advanced sensor technologies has enabled robots to perceive and interact with their environment in increasingly sophisticated ways, paving the way for enhanced autonomy and intelligence [1] [9]. At the heart of this evolution lies the sensor controller, a critical component responsible for orchestrating the flow of real-time data from several sensors to the central processing unit [9]. ...
... At the heart of this evolution lies the sensor controller, a critical component responsible for orchestrating the flow of real-time data from several sensors to the central processing unit [9]. The sensor controller serves as the bridge between the physical world and the digital realm, facilitating communication between the various sensors scattered throughout the robotic ecosystem and the central control unit [1]. Here, a potent and adaptable computing device serves as the central nervous system, utilizing the flood of sensor data to make deft decisions and carry out exact activities [7]. ...
Conference Paper
Full-text available
This paper presents the design and development of a sensor controller, focusing on efficient data handling for integrated sensors utilized in robotic applications. In this study, we describe an extensive analysis of data collected with various sensors using multiple communication interfaces, such as USB Serial, I2C, and UART, which are used to transmit control commands and sensor data. A Teensy 4.1 microcontroller serves as the sensor controller in this system, enabling communication between the sensor controller and the system master controller. The study provides comprehensive environmental perception and control in robotics with the integration of ultrasonic sensors, current sensors, IMU, and hall sensors. To guarantee dependable sensor data acquisition and system functionality, the testing approach evaluates both manual and protocol-based communication for each sensor. This work demonstrates the simultaneous data reception made possible by the integration of several sensors with different protocols into a single microcontroller. The process of evaluating accuracy required contrasting human measurements with protocol-based communication for a range of sensors, including IMUs (orientation detection), current sensors (current flow measurement), and ultrasonic sensors (distance measurement). The integrated sensor system was put to the test using a variety of measuring techniques, including software-based and manual protocols, as part of the validation process. The precision of individual sensors is demonstrated by the results, which validate the correctness of the data acquired. The results of the experiment provide a detailed comparison between data provided by software and manual measurements, confirming the dependability and efficiency of the integrated sensor system. From the experiment results it is clear that the accuracy of the system is above 90%.
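A hedged host-side sketch of the data-handling pattern described above: polling a sensor controller over a USB serial link and parsing its readings. The port name, baud rate, and newline-delimited JSON framing are assumptions for illustration and are not taken from the paper's actual protocol.

```python
import json
import serial  # pyserial

# Host-side sketch of polling an external sensor controller over USB serial.
# Device node, baud rate, and message format are illustrative assumptions.

PORT = "/dev/ttyACM0"   # hypothetical device node for the microcontroller
BAUD = 115200

def read_sensor_frames(port=PORT, baud=BAUD, max_frames=10):
    frames = []
    with serial.Serial(port, baud, timeout=1.0) as link:
        while len(frames) < max_frames:
            line = link.readline().decode("utf-8", errors="ignore").strip()
            if not line:
                continue                      # timeout or empty line: keep polling
            try:
                frames.append(json.loads(line))   # e.g. {"ultrasonic_cm": 82.4, "imu_yaw": 1.57}
            except json.JSONDecodeError:
                pass                          # discard malformed frames
    return frames

if __name__ == "__main__":
    for frame in read_sensor_frames(max_frames=3):
        print(frame)
```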
... It also denotes a mechanism that directly associates sensory input with control command without requiring prior knowledge. Reflex-based robot navigation has been developed for a long time [17][18][19][20][21][22]. They have been mainly studied from the perspectives of fuzzy logic [20,21], subsumption architecture [17,18], morphological computation [22], and reinforcement learning [20]. ...
... Reflex-based robot navigation has been developed for a long time [17][18][19][20][21][22]. They have been mainly studied from the perspectives of fuzzy logic [20,21], subsumption architecture [17,18], morphological computation [22], and reinforcement learning [20]. On the other hand, their primary purposes are the movement to target positions represented as coordinates and collision avoidance, and open-vocabulary navigation is far from their scope. ...
Preprint
Various robot navigation methods have been developed, but they are mainly based on Simultaneous Localization and Mapping (SLAM), reinforcement learning, etc., which require prior map construction or learning. In this study, we consider the simplest method that does not require any map construction or learning, and execute open-vocabulary navigation of robots without any prior knowledge to do this. We applied an omnidirectional camera and pre-trained vision-language models to the robot. The omnidirectional camera provides a uniform view of the surroundings, thus eliminating the need for complicated exploratory behaviors including trajectory generation. By applying multiple pre-trained vision-language models to this omnidirectional image and incorporating reflective behaviors, we show that navigation becomes simple and does not require any prior setup. Interesting properties and limitations of our method are discussed based on experiments with the mobile robot Fetch.
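A minimal sketch of the reflex-style navigation loop this preprint describes, under stated assumptions: the omnidirectional image is split into angular sectors, a vision-language similarity score (stubbed out below) ranks the sectors against the language goal, and a range-based reflex vetoes blocked directions. The sector count, clearance threshold, and scoring stub are illustrative, not the paper's actual models.

```python
import numpy as np

def vlm_similarity(sector_image, text_goal):
    """Placeholder for a pretrained vision-language similarity score in [0, 1]."""
    return float(np.random.rand())            # assumption: stand-in for a real model

def choose_heading(panorama_sectors, ranges, text_goal, min_clearance=0.6):
    scores = np.array([vlm_similarity(img, text_goal) for img in panorama_sectors])
    scores[np.array(ranges) < min_clearance] = -np.inf   # reflex: never drive at obstacles
    best = int(np.argmax(scores))
    n = len(panorama_sectors)
    return 2 * np.pi * best / n               # heading of the chosen sector (radians)

sectors = [np.zeros((64, 64, 3)) for _ in range(8)]      # dummy 8-sector panorama
ranges = [2.0, 0.4, 1.5, 3.0, 0.5, 2.2, 1.1, 0.9]        # metres of clearance per sector
print(choose_heading(sectors, ranges, "go to the red chair"))
```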
... The second stage is the Reactive agents stage. Unlike symbolic agents, reactive agents are centered on the immediate perception and response of the agent to environmental changes (R. A. Brooks 1991;Maes 1990;Nilsson 1992). Unlike symbolic intelligences, which focus on symbolic manipulation and complex logical reasoning, reactive intelligences focus on establishing a direct mapping between inputs and environmental stimuli as well as between outputs and behavioral responses (R. Brooks 1986;Schoppers 1987). The goal is to achieve accu ...
... Unlike symbolic agents, reactive agents are centered on the immediate perception and response of the agent to environmental changes (R. A. Brooks 1991;Maes 1990;Nilsson 1992). Unlike symbolic intelligences, which focus on symbolic manipulation and complex logical reasoning, reactive intelligences focus on establishing a direct mapping between inputs and environmental stimuli as well as between outputs and behavioral responses (R. Brooks 1986;Schoppers 1987). The goal is to achieve accurate and rapid responses with minimal computational resources. ...
Preprint
Across many industrial fields, people have been exploring methods aimed at freeing up human labor, and constructing LLM-based agents is considered one of the most effective tools for achieving this goal. Agents, as human-like intelligent entities capable of perception, planning, decision-making, and action, have created great production value in many fields. However, the bridge operation and maintenance (O&M) field shows a relatively low level of intelligence compared to other industries. Nevertheless, it has developed numerous intelligent inspection devices, machine learning algorithms, and autonomous evaluation and decision-making methods, which provide a feasible basis for breakthroughs in artificial intelligence. The aim of this study is to explore the impact of agents based on large language models on the field of bridge O&M and to analyze the potential challenges and opportunities they bring to its core tasks. Through in-depth research and analysis, this paper aims to provide a more comprehensive perspective for understanding the application of intelligent agents in this field.
... Vibration resistance: ω_n = √(k/m), where ω_n is the natural frequency, k is stiffness, and m is mass [92]. Signal attenuation ...
Article
Full-text available
This study presents the development of a conceptual model for an autonomous underwater vehicle (AUV) information and control system (ICS) tailored for the mineral and raw materials complex (MRMC). To address the challenges of underwater mineral exploration, such as harsh conditions, high costs, and personnel risks, a comprehensive model was designed. This model was built using correlation analysis and expert evaluations to identify critical parameters affecting AUV efficiency and reliability. Key elements, including pressure resistance, communication stability, energy efficiency, and maneuverability, were prioritized. The results indicate that enhancing these elements can significantly improve AUV performance in deep-sea environments. The proposed model optimizes the ICS, providing a foundation for designing advanced AUVs capable of efficiently executing complex underwater tasks. By integrating these innovations, the model aims to boost operational productivity, ensure safety, and open new avenues for mineral resource exploration. This study’s findings highlight the importance of focusing on critical AUV parameters for developing effective and reliable solutions, thus addressing the pressing needs of the MRMC while promoting sustainable resource management.
... [Flattened classification table from the citing study: surveyed papers listed by reference number under the categories Waterfall, Agile, Tool, Concept, Language, Formalism, and Architecture.] ...
Article
Cyber-physical systems (CPS), robotics, the Internet of Things (IoT), and automotive systems are integral to modern technology. They are characterized by their safety criticality, accuracy, and real-time control requirements. Control software plays a crucial role in achieving these objectives by managing and coordinating the operations of various sub-systems. This paper presents a novel systematic mapping study (SMS) for control software engineering, analyzing 115 peer-reviewed papers. The study identifies, classifies, and maps existing solutions, providing a comprehensive and structured overview for practitioners and researchers. Our contributions include (i) a unique classification of literature into six research themes—engineering phases, engineering approaches, engineering paradigms, engineering artefacts, target application domains, and engineering concerns; (ii) insights into the specificity of approaches to target technologies and phases; (iii) the prominence of model-driven approaches for design and testing; (iv) the lack of end-to-end engineering support in existing approaches; and (v) the emerging role of agile-based methods versus the dominance of waterfall-based methods. This paper’s significance lies in its thorough analysis and the high-level mapping of the solution space, offering new perspectives and a detailed roadmap for future research and innovation in control software engineering. The findings will guide advancements and best practices in the field, underscoring the paper’s impact.
... Arbitration graphs originated in the context of robot soccer [1], integrating ideas from Brooks' behavior-based subsumption [13], knowledge-based architectures like Belief-Desire-Intention (BDI) [14], and programming paradigms such as object-oriented programming [15]. ...
Preprint
Full-text available
This paper introduces an extension to the arbitration graph framework designed to enhance the safety and robustness of autonomous systems in complex, dynamic environments. Building on the flexibility and scalability of arbitration graphs, the proposed method incorporates a verification step and structured fallback layers in the decision-making process. This ensures that only verified and safe commands are executed while enabling graceful degradation in the presence of unexpected faults or bugs. The approach is demonstrated using a Pac-Man simulation and further validated in the context of autonomous driving, where it shows significant reductions in accident risk and improvements in overall system safety. The bottom-up design of arbitration graphs allows for an incremental integration of new behavior components. The extension presented in this work enables the integration of experimental or immature behavior components while maintaining system safety by clearly and precisely defining the conditions under which behaviors are considered safe. The proposed method is implemented as a ready to use header-only C++ library, published under the MIT License. Together with the Pac-Man demo, it is available at github.com/KIT-MRT/arbitration_graphs.
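The following is a simplified sketch of priority-based arbitration with a verification step and a fallback, in the spirit of the extension described above. It is not the API of the published arbitration_graphs library; class and function names are assumptions.

```python
# Simplified arbitrator: options are checked in priority order, only verified
# commands are executed, and a fallback provides graceful degradation.

class Option:
    def __init__(self, name, applicable, command):
        self.name = name
        self.applicable = applicable   # () -> bool: does this behavior want control?
        self.command = command         # () -> command proposal

def verify(command):
    """Stand-in safety check; a real verifier would test collision risk, limits, etc."""
    return command.get("speed", 0.0) <= 2.0

def arbitrate(options, fallback):
    for option in options:                         # options ordered by priority
        if option.applicable():
            cmd = option.command()
            if verify(cmd):                        # only verified commands pass
                return option.name, cmd
    return "fallback", fallback                    # graceful degradation

options = [
    Option("experimental_overtake", lambda: True, lambda: {"speed": 5.0}),  # fails verification
    Option("lane_follow", lambda: True, lambda: {"speed": 1.2}),
]
print(arbitrate(options, fallback={"speed": 0.0}))   # -> ('lane_follow', {'speed': 1.2})
```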
... These sensors can generate 2D or 3D range/direction representations of environments, often referred to as energyscapes [10], and have proven their application potential in mobile robotics. One interesting approach to mobile robotics using ultrasound is based on the subsumption architecture [11] and uses an analog to the widely known optical flow which is called acoustic flow [12][13][14]. Acoustic flow is an approach to using expected transformations in the sensor observations, based on the robot's ego-motion data, to control a robot in a desired manner without explicit object segmentation. ...
Article
Full-text available
The predictive brain hypothesis suggests that perception can be interpreted as the process of minimizing the error between predicted perception tokens generated via an internal world model and actual sensory input tokens. When implementing working examples of this hypothesis in the context of in-air sonar, significant difficulties arise due to the sparse nature of the reflection model that governs ultrasonic sensing. Despite these challenges, creating consistent world models using sonar data is crucial for implementing predictive processing of ultrasound data in robotics. In an effort to enable robust robot behavior using ultrasound as the sole exteroceptive sensor modality, this paper introduces EchoPT (Echo-Predicting Pretrained Transformer), a pretrained transformer architecture designed to predict 2D sonar images from previous sensory data and robot ego-motion information. We detail the transformer architecture that drives EchoPT and compare the performance of our model to several state-of-the-art techniques. In addition to presenting and evaluating our EchoPT model, we demonstrate the effectiveness of this predictive perception approach in two robotic tasks.
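A toy sketch of the predictive-perception loop this abstract describes: predict the next sonar frame from the previous frame and the robot's ego-motion, then score the prediction error. The trivial shift-based predictor below merely stands in for the EchoPT transformer.

```python
import numpy as np

def predict_next_frame(prev_frame, ego_motion_px):
    """Assumption: approximate ego-motion as a horizontal shift of the 2D sonar image."""
    return np.roll(prev_frame, ego_motion_px, axis=1)

def prediction_error(predicted, observed):
    return float(np.mean((predicted - observed) ** 2))   # per-pixel MSE

prev = np.random.rand(32, 64)                 # previous 2D sonar frame (dummy data)
observed = np.roll(prev, 3, axis=1) + 0.01 * np.random.randn(32, 64)
pred = predict_next_frame(prev, ego_motion_px=3)
print("prediction error:", prediction_error(pred, observed))
```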
... One of the reasons why the vision of capable and safe robotic agents is still out of reach is the lack of commonsense reasoning, and insufficient understanding of contextual information. A robot can only perceive what it has been programmed to, either explicitly by creating some form of world model (Besl and Jain 1985) using some perception pipeline (e.g. computer vision algorithms), or implicitly by reacting to the stimulus coming from the sensors (Brooks 1986). In both cases, the range of possible interpretation of the sensory information is limited by the implicit and explicit knowledge that the agent is equipped with. ...
Article
Autonomous robotic systems depend on their perception and understanding of their environment for informed decision-making. One of the goals of the Semantic Web is to make knowledge on the Web machine-readable, which can significantly aid robots by providing background knowledge, and thereby support their understanding. In this paper, we present a reasoning system that uses the Ontology for Robotic Knowledge Acquisition (ORKA) to integrate the sensory data and perception algorithms of the robot, thereby enhancing its autonomous capabilities. This reasoning system is subsequently employed to retrieve and integrate information from the Semantic Web, thereby improving the robot's comprehension of its environment. To achieve this, the system employs a Perceived-Entity Linking (PEL) pipeline that associates regions in the sensory data of the robotic agent with concepts in a target knowledge graph. As a use-case for the linking process, the Perceived-Entity Typing task is used to determine the more fine-grained subclass of the perceived entities. Specifically, we provide an analysis of the performance of different knowledge graph embedding methods on the task, using annotated observations and WikiData as the target knowledge graph. The experiments indicate that relying on pre-trained embedding methods results in increased performance when using TransE as the embedding method for the observations of the robot. This contribution advances the field by demonstrating the potential of integrating Semantic Web technologies with robotic perception, thereby enabling more nuanced and context-aware decision-making in autonomous systems.
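A toy illustration of TransE-style scoring as it might be used for perceived-entity typing: candidate fine-grained types are ranked by how well h + r ≈ t holds for the observed entity and an "instance of" relation. The embeddings below are random stand-ins, not trained WikiData embeddings, and the entity names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16
embeddings = {name: rng.normal(size=dim) for name in
              ["observed_entity", "instance_of", "office_chair", "armchair", "stool"]}

def transe_score(h, r, t):
    """Higher is better: negative L2 distance of h + r from t."""
    return -np.linalg.norm(embeddings[h] + embeddings[r] - embeddings[t])

candidates = ["office_chair", "armchair", "stool"]
ranked = sorted(candidates,
                key=lambda t: transe_score("observed_entity", "instance_of", t),
                reverse=True)
print(ranked)   # candidate subclasses ordered by plausibility under this toy model
```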
... [Flattened classification table from the citing study: surveyed papers listed by reference number under the categories Waterfall, Agile, Tool, Concept, Language, Formalism, and Architecture.] ...
Article
Dynamic control software reconfiguration for the Internet of Things (IoT) and cyber-physical systems (CPS) is crucial for adaptable and efficient automation. This paper presents a knowledge-driven architecture enabling dynamic device reconfiguration using the Web Ontology Language (OWL) and Terse Triple Language (TTL) formats. Key components include a capability ontology, session-type information for sequencing and concurrent operations, and an Integrated Development Environment (IDE) for automated control design. The capability ontology standardizes machine capabilities, facilitating device integration based on their capabilities, while session-type information ensures correct sequencing and synchronization of machine functions. The IDE platform supports dynamic reconfiguration by automating device selection, control strategy formulation, and system adjustments across diverse use cases. The architecture has been validated in real-world scenarios, including smart meeting rooms, warehouse automation, and energy management, showing a reduction in manual configuration time (up to 50%), development time (86% in some cases), and error rates (30%). Benchmarking results indicate faster code generation (40% improvement) and efficient component integration across different CPS environments. Challenges like computational complexity, scalability, and integration with existing systems highlight limitations. Future research will explore further optimizations and broader applicability to ensure low-latency, high-accuracy, and seamless integration in complex CPS. This work advances dynamic control software reconfiguration by providing a flexible solution that enhances CPS reliability and efficiency through a knowledge-driven approach.
... Performing wholebody control on humanoid hardware is a long-standing challenge in robotics due to the complex structure of humanoid robots. Before the rise in popularity of learning-based controllers, classical humanoid controllers [24][25][26][27][28][29][30][31][32][33] often use a hierarchical model-based optimization to solve for the low-level torque or position commands sent to hardware motors, where actuator-level dynamics on single joints are abstracted to multi-joint [26] or whole-body [27] controllers. Learning-based controllers follow the same design pattern in spirit, where high-level inputs are translated into lowlevel motor commands via neural networks. ...
Preprint
Humanoid whole-body control requires adapting to diverse tasks such as navigation, loco-manipulation, and tabletop manipulation, each demanding a different mode of control. For example, navigation relies on root velocity tracking, while tabletop manipulation prioritizes upper-body joint angle tracking. Existing approaches typically train individual policies tailored to a specific command space, limiting their transferability across modes. We present the key insight that full-body kinematic motion imitation can serve as a common abstraction for all these tasks and provide general-purpose motor skills for learning multiple modes of whole-body control. Building on this, we propose HOVER (Humanoid Versatile Controller), a multi-mode policy distillation framework that consolidates diverse control modes into a unified policy. HOVER enables seamless transitions between control modes while preserving the distinct advantages of each, offering a robust and scalable solution for humanoid control across a wide range of modes. By eliminating the need for policy retraining for each control mode, our approach improves efficiency and flexibility for future humanoid applications.
... The lecture covered topics such as systems engineering, embedded software, and robotics in theory, using the platform as a practical illustration. During the exercises, students acquired a comprehensive understanding of the platform, from setting up the software stack to implementing both low-level (e.g., motor control) and high-level control strategies (e.g., implementation of robotic control architectures like those described in [12]) using various hardware modules. The exercises were designed for students to work in pairs, and simulation tools like Gazebo were not originally planned for use. ...
... While localization remains an important skill for any robot, there is a long-history of highly effective localization-free systems (Brooks 1986;Kinzer 2009;Bennett 2021). By wandering in space, such systems are capable of completing a wide range of tasks, especially coverage-based, such as vacuum cleaning (Bennett 2021) or patrolling. ...
Article
Full-text available
Recently, there has been great interest in deploying autonomous mobile robots in airports, malls, and hospitals to complete a range of tasks such as delivery, cleaning, and patrolling. The rich context of these environments gives rise to highly unstructured motion that is challenging for robots to anticipate and adapt to. This results in uncomfortable and unsafe human–robot encounters, poor robot performance, and even catastrophic failures that hinder robot acceptance. Such observations have motivated my work on social robot navigation, the problem of enabling robots to navigate in human environments while accounting for human safety and comfort. In this article, I highlight prior work on expanding the classical autonomy stack with mathematical models and algorithms designed to contribute towards smoother mobile robot deployments in complex environments.
... In the final part of this contribution, we consider cognitive architectures that have been investigated in robotics to explore the relationship between sensorimotor activity, language, narrative and memory. An important characteristic of many robot control systems is the use of layered cognitive architectures [62][63][64][65]. Layering provides the capacity to co-ordinate responses on different timescales, with different forms and depths of internal processing, and provides robustness through the presence of multiple solutions [51,66]. ...
Article
Full-text available
Episodic memories are experienced as belonging to a self that persists in time. We review evidence concerning the nature of human episodic memory and of the sense of self and how these emerge during development, proposing that the younger child experiences a persistent self that supports a subjective experience of remembering. We then explore recent research in cognitive architectures for robotics that has investigated the possibility of forms of synthetic episodic and autobiographical memory. We show that recent advances in generative modeling can support an understanding of the emergence of self and of episodic memory, and that cognitive architectures which include a language capacity are showing progress towards the construction of a narrative self with autobiographical memory capabilities for robots. We conclude by considering the prospects for a more complete model of mental time travel in robotics and the implications of this modeling work for understanding human episodic memory and the self in time. This article is part of the theme issue ‘Elements of episodic memory: lessons from 40 years of research’.
... associated to Jacobian matrices, mapping the velocity commands associated to the subtasks of lower priorities in the null space of the Jacobian matrix associated to the subtask of highest priority. The result is an analytical implementation of the classical subsumption architecture proposed by Brooks [31], with proven stability [28,32], which is quite important when regarding a switching control system. In this work the subtasks to move the formation and to avoid an obstacle in the floor, or to move the formation and to avoid an obstacle in the air, are the possible pairs of concurrent subtasks. ...
Article
Full-text available
A control system based on the control paradigm of virtual structure is here proposed for a multi-robot system involving a quadrotor and a ground vehicle, operating in an automated warehouse. The ground robot can either provide extra power to the quadrotor, thus increasing its autonomy, or receive data from it. Therefore, the quadrotor is tethered to the ground robot through flexible cables, thus justifying the adoption of the virtual structure control paradigm, which allows controlling the two vehicles simultaneously. The control approach adopted aims at guiding the virtual vertical line joining the two robots to allow the quadrotor to produce an inventory of goods in an automated warehouse. Therefore, the two robots should visit a sequence of known positions, in front of cabinets of vertically arranged shelves. In each of them the quadrotor should read QR codes, bar-codes or RFID cards corresponding to the stored boxes, to produce the inventory. Therefore, the control objective, the focus of this paper, is to keep the shape of the virtual vertical line linking the two robots while moving. However, when an obstacle appears in the route, such as a box or other robot in the floor or another aerial robot, the formation changes its shape accordingly, to avoid the obstacle. An experiment in lab scale, mimicking a real situation, is run, whose results allow claiming that the proposed system is an effective solution for the problem of controlling a multi-robot system to produce an inventory in an automated warehouse.
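The null-space projection mentioned in the citing context above can be sketched in a few lines of numpy: the secondary subtask's velocity command is projected into the null space of the primary subtask's Jacobian so that it cannot disturb the primary subtask. The Jacobians and desired rates below are arbitrary example values, and this simple projection form is only one variant of prioritized task control, not the paper's full formation model.

```python
import numpy as np

def prioritized_velocity(J1, v1, J2, v2):
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1        # null-space projector of task 1
    return J1_pinv @ v1 + N1 @ (np.linalg.pinv(J2) @ v2)

J1 = np.array([[1.0, 0.0, 0.0]])        # primary subtask (e.g., keep the formation shape)
J2 = np.array([[0.0, 1.0, 0.5]])        # secondary subtask (e.g., avoid an obstacle)
v1 = np.array([0.2])
v2 = np.array([0.5])

qdot = prioritized_velocity(J1, v1, J2, v2)
print(qdot, J1 @ qdot)                   # the primary task rate J1 @ qdot stays exactly at v1
```

Because J1 @ N1 = 0 by construction, the obstacle-avoidance command can never alter the primary task rate, which is what makes the switching between subtask pairs well behaved.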
... Inspired by the reactive behaviors observed in living organisms, this method utilizes a collection of independent software modules referred to as behaviors. This behavior-based formation control idea was first introduced by [58] in 1986, but the demonstration and applications were first given by T. Balch in his 1998 PhD thesis [59], where he implemented the behavior-based approach in mobile robots, integrated with other navigational behaviors, to enable a robotic team to reach navigational goals, avoid hazards, and simultaneously remain in formation. This method uses a weighted hybrid of different mission objectives to generate vehicle control inputs. ...
Thesis
Full-text available
The rapid advancement of unmanned vehicle technology has been significantly propelled by various innovative control methods. These technological improvements not only enhance transportation systems but also foster collaboration among multiple vehicles, thereby alleviating traffic congestion and improving road safety. Formation control, an emerging concept in vehicle operations, has garnered considerable attention due to its broad applicability. Researchers have extensively worked to integrate formation control across various sectors, including mobile robots, aerial and ground vehicles, surface vessels, and underwater vehicles. This comprehensive effort underscores the versatility and potential of formation control, positioning it as a transformative technology in unmanned vehicle systems. As research and implementation efforts continue, the capability of formation control to revolutionize operational methodologies and introduce innovative transportation solutions is becoming increasingly evident. In the realm of unmanned or autonomous vehicle coordination, the Virtual Structure Approach (VSA) offers significant advantages over traditional leader-following and artificial potential field methods. The VSA provides geometric structures that facilitate cohesive group movement. However, ensuring obstacle-free navigation remains a critical challenge, necessitating that vehicles avoid obstacles while maintaining coordination with group members. This study introduces a formation control technique based on the Virtual Structure Approach, integrating an improved A* path planning algorithm and the Monte Carlo Localization (MCL) technique. By leveraging the VSA, this method establishes a virtual center among four autonomous vehicles, forming a rectangular configuration to guide them from the start point to the endpoint. The improved A* path planning algorithm is employed to identify obstacle-free routes. To achieve localization and obstacle avoidance, the vehicles utilize a 2D LiDAR sensor coupled with the MCL method. Additionally, a Pure Pursuit controller is used to manage the trajectory tracking of the vehicle group. To validate the accuracy of the proposed method, MATLAB-generated binary maps were utilized to observe the movement and path planning strategy of the improved A* algorithm within the vehicle group. Efficiency comparisons with existing formation control techniques were conducted to evaluate the method's effectiveness. The improved A* algorithm demonstrated a maximum runtime reduction of 41.63% and a path distance reduction of 29.31% in various maps. This approach shows promise for enhancing unmanned vehicle operations, with potential applications extending beyond transportation. The main innovations of the proposed research include: 1. Forming a geometric rectangular shape using the virtual structure approach to maintain group formation during movement. 2. Enhancing the traditional A* path planning algorithm by optimizing nodes and paths through an evaluation function that indicates obstacle-free routes for the vehicle group. 3. Avoid collisions between obstacles and group vehicles by employing the MCL technique, which uses LiDAR sensor data to help vehicles understand their surroundings and prevent collisions. In short, the virtual structure-based formation control method presents a promising approach for guiding groups of vehicles. This method, which combines advanced technology with intelligent algorithms, has the potential to transform the operation of unmanned vehicles. 
The fact that it has been rigorously tested and compared with other methods underscores its practical utility, not only in transportation but also in addressing various challenges in the future development of the autonomous vehicle industry.
... Control architectures and hierarchies play a critical role in robotics. The introduction of behavior-based robotics, for instance, is considered to be a seminal moment in robotics [84], [85]. In these frameworks, a set of low-level policies form the basic behavioral building blocks which are switched or combined in order to synthesize complex robot control patterns. ...
Preprint
Full-text available
Achieving human-level speed and performance on real world tasks is a north star for the robotics research community. This work takes a step towards that goal and presents the first learned robot agent that reaches amateur human-level performance in competitive table tennis. Table tennis is a physically demanding sport which requires human players to undergo years of training to achieve an advanced level of proficiency. In this paper, we contribute (1) a hierarchical and modular policy architecture consisting of (i) low level controllers with their detailed skill descriptors which model the agent's capabilities and help to bridge the sim-to-real gap and (ii) a high level controller that chooses the low level skills, (2) techniques for enabling zero-shot sim-to-real including an iterative approach to defining the task distribution that is grounded in the real-world and defines an automatic curriculum, and (3) real time adaptation to unseen opponents. Policy performance was assessed through 29 robot vs. human matches of which the robot won 45% (13/29). All humans were unseen players and their skill level varied from beginner to tournament level. Whilst the robot lost all matches vs. the most advanced players it won 100% matches vs. beginners and 55% matches vs. intermediate players, demonstrating solidly amateur human-level performance. Videos of the matches can be viewed at https://sites.google.com/view/competitive-robot-table-tennis
... Using terrestrial robots as an example, some of the earliest successful walking robots [e.g., Genghis (Angle 1989), Robot II (Espenschied et al. 1996), ASIMO (Sakagami et al. 2002), etc.] relied on reactive controllers using a simple architecture of nested feedback (Brooks 1986). As they become more capable, such robots can serve as platforms to systematically explore and test hypotheses about different neural architectures commonly found in biological organisms (Ijspeert 2014, 2008; Ramdya and Ijspeert 2023). More recently, robotic systems are beginning to demonstrate an impressive capacity for motor learning that enables agile maneuvers over harsh (Hwangbo et al. 2019; Lee et al. 2020) and deformable natural terrains (Choi et al. 2023; Guizzo 2019) and even compensate for leg loss (Cully et al. ...
Article
Synopsis Whether walking, running, slithering, or flying, organisms display a remarkable ability to move through complex and uncertain environments. In particular, animals have evolved to cope with a host of uncertainties—both of internal and external origin—to maintain adequate performance in an ever-changing world. In this review, we present mathematical methods in engineering to highlight emerging principles of robust and adaptive control of organismal locomotion. Specifically, by drawing on the mathematical framework of control theory, we decompose the robust and adaptive hierarchical structure of locomotor control. We show how this decomposition along the robust–adaptive axis provides testable hypotheses to classify behavioral outcomes to perturbations. With a focus on studies in non-human animals, we contextualize recent findings along the robust–adaptive axis by emphasizing two broad classes of behaviors: (1) compensation to appendage loss and (2) image stabilization and fixation. Next, we attempt to map robust and adaptive control of locomotion across some animal groups and existing bio-inspired robots. Finally, we highlight exciting future directions and interdisciplinary collaborations that are needed to unravel principles of robust and adaptive locomotion.
... To ensure stability, we define the commander agent as a reactive agent rather than an LLM-based one. This design focuses on direct input-output mappings instead of complex reasoning, allowing it to respond to the environment in a stable manner [1], [48], [49], [50], [51]. Following Definition 1, the commander agent should be defined as A cm = ⟨B ct , I ct , Act ct ⟩. ...
Preprint
Full-text available
Recent advancements have significantly improved automated task-solving capabilities using autonomous agents powered by large language models (LLMs). However, most LLM-based agents focus on dialogue, programming, or specialized domains, leaving gaps in addressing generative AI safety tasks. These gaps are primarily due to the challenges posed by LLM hallucinations and the lack of clear guidelines. In this paper, we propose Atlas, an advanced LLM-based multi-agent framework that integrates an efficient fuzzing workflow to target generative AI models, specifically focusing on jailbreak attacks against text-to-image (T2I) models with safety filters. Atlas utilizes a vision-language model (VLM) to assess whether a prompt triggers the T2I model's safety filter. It then iteratively collaborates with both LLM and VLM to generate an alternative prompt that bypasses the filter. Atlas also enhances the reasoning abilities of LLMs in attack scenarios by leveraging multi-agent communication, in-context learning (ICL) memory mechanisms, and the chain-of-thought (COT) approach. Our evaluation demonstrates that Atlas successfully jailbreaks several state-of-the-art T2I models in a black-box setting, which are equipped with multi-modal safety filters. In addition, Atlas outperforms existing methods in both query efficiency and the quality of the generated images.
... Thus, skilled behavior in the environment can be understood to form an "extended intentional state" (Malafouris, 2013, p. 142). Rather than conceptualize intentionality as the non-derived mental contents of individuals we propose an enactive intentionality that emerges through dynamic transactions with a world which acts as its own best (nonrepresentational) model (Brooks, 1986). ...
Article
Full-text available
This paper aims to place the general thesis for a species-unique “shared” or “we” intentionality, against the theoretical background of the material engagement approach. We will argue that the human ability to enact and share intentions rests upon a relational and participatory foundation of situated activity where intentional transactions between humans as well as between humans and things (in the broadest sense of material environment) are inseparable from the situational affordances of their engagement. Based on that, the paper advocates for an ecological-enactive account of shared intentionality which understands material engagement as central to the evolution and development of human social cognition.
... Subsumption architecture: As proposed by [247], the subsumption architecture presents a real-time control option, as an alternative to the sense-plan-act paradigm. In this architecture, higher-level behaviours exert control over lower-level ones, facilitating the delegation of minor tasks to lower levels. ...
Preprint
Full-text available
With the advancements in human-robot interaction (HRI), robots are now capable of operating in close proximity and engaging in physical interactions with humans (pHRI). Likewise, contact-based pHRI is becoming increasingly common as robots are equipped with a range of sensors to perceive human motions. Despite the presence of surveys exploring various aspects of HRI and pHRI, there is presently a gap in comprehensive studies that collect, organize and relate developments across all aspects of contact-based pHRI. It has become challenging to gain a comprehensive understanding of the current state of the field, thoroughly analyze the aspects that have been covered, and identify areas needing further attention. Hence, the present survey. While it includes key developments in pHRI, a particular focus is placed on contact-based interaction, which has numerous applications in industrial, rehabilitation and medical robotics. Across the literature, a common denominator is the importance to establish a safe, compliant and human intention-oriented interaction. This endeavour encompasses aspects of perception, planning and control, and how they work together to enhance safety and reliability. Notably, the survey highlights the application of data-driven techniques: backed by a growing body of literature demonstrating their effectiveness, approaches like reinforcement learning and learning from demonstration have become key to improving robot perception and decision-making within complex and uncertain pHRI scenarios. As the field is yet in its early stage, these observations may help guide future developments and steer research towards the responsible integration of physically interactive robots into workplaces, public spaces, and elements of private life.
Chapter
Full-text available
Toward a future symbiotic society with Cybernetic Avatars (CAs), it is crucial to develop socially well-accepted CAs and to discuss legal, ethical, and socioeconomic issues to update social rules and norms. This chapter provides interdisciplinary discussions for these issues from the perspectives of technological and social sciences. First, we propose avatar social implementation guidelines and present studies that contribute to the development of socially well-accepted CAs. The second part of this chapter addresses the ethical and legal issues in installing CAs in society and discusses solutions for them.
Article
Full-text available
Major Depressive Disorder (MDD) is a complex, heterogeneous condition affecting millions worldwide. Computational neuropsychiatry offers potential breakthroughs through the mechanistic modeling of this disorder. Using the Kolmogorov theory (KT) of consciousness, we developed a foundational model where algorithmic agents interact with the world to maximize an Objective Function evaluating affective valence. Depression, defined in this context by a state of persistently low valence, may arise from various factors—including inaccurate world models (cognitive biases), a dysfunctional Objective Function (anhedonia, anxiety), deficient planning (executive deficits), or unfavorable environments. Integrating algorithmic, dynamical systems, and neurobiological concepts, we map the agent model to brain circuits and functional networks, framing potential etiological routes and linking with depression biotypes. Finally, we explore how brain stimulation, psychotherapy, and plasticity-enhancing compounds such as psychedelics can synergistically repair neural circuits and optimize therapies using personalized computational models.
Article
On this 30th anniversary of the founding of the Artificial Life journal, I share some personal reflections on my own history of engagement with the field, my own particular assessment of its current status, and my vision for its future development. At the very least, I hope to stimulate some necessary critical conversations about the field of Artificial Life and where it is going.
Preprint
Integral feedback control strategies have proven effective in regulating protein expression in unpredictable cellular environments. These strategies, grounded in model-based designs and control theory, have advanced synthetic biology applications. Autocatalytic integral feed-back controllers, utilizing positive autoregulation for integral action, are particularly promising due to their similarity to natural behaviors like self-replication and positive feedback seen across biological scales. However, their effectiveness is often hindered by resource competition and context-dependent couplings. This study addresses these challenges with a multi-layer feedback strategy, enabling population-level integral feedback and multicellular integrators. We provide a generalized mathematical framework for modeling resource competition in complex genetic networks, supporting the design of intracellular control circuits. Our controller motif demonstrated precise regulation in tasks ranging from gene expression control to population growth in multi-strain communities. We also explore a variant capable of ratiometric control, proving its effectiveness in managing gene ratios and co-culture compositions in engineered microbial ecosystems. These findings offer a versatile approach to achieving robust adaptation and homeostasis from subcellular to multicellular scales.
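As a generic illustration of the integral feedback principle underlying these controllers (not the paper's specific autocatalytic or multi-layer circuit), the sketch below simulates a controller species that integrates the setpoint error of a regulated species; the rate constants are assumed values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic integral feedback motif: controller state z integrates the setpoint
# error, so the regulated species x settles at the setpoint r even if the
# process gain k or degradation rate gamma is perturbed.

r, mu, k, gamma = 2.0, 0.5, 1.0, 0.3      # setpoint and rate constants (assumed values)

def dynamics(t, state):
    x, z = state
    dx = k * z - gamma * x                 # production driven by the controller, plus degradation
    dz = mu * (r - x)                      # integral action on the setpoint error
    return [dx, dz]

sol = solve_ivp(dynamics, (0.0, 100.0), [0.0, 0.0], dense_output=True)
print("final x ~", sol.y[0, -1])           # approaches the setpoint r = 2.0
```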
Article
Full-text available
In architectural and construction robotics research, we now have powerful technologies whose histories are only partially understood. Their ubiquity is matched by persistent historical narratives around their invention that have built up over time through repetition. Appearing in historical surveys and background research for theses and dissertations, the narratives of these tools are infrequently challenged—a situation that has implications for the conception and execution of the research projects that employ them. How do we begin to center narrative and politics in the context of a specialized area of research like construction robotics? In this investigation, we interrogate a set of iconic and influential robotics projects to expand the knowledge base around them and avoid inadvertently perpetuating harmful practices: Ross Ashby’s Homeostat, Grey Walter’s Tortoises, George Devol’s Programmed Article Transfer (Unimate), and Stanford Research Institute’s Mobile Automaton (Shakey). To arrive at a different understanding of these familiar works, we propose an alternative framework—a reconfiguration of definitions of efficiency and utility that we refer to as “robot excess.” Employing the novel method of movement as a hermeneutic device to examine these, we find that certain movements were interpreted as valuable and worthy of study and documentation, while others were considered excessive and, therefore, practically irrelevant. Further, we show that the observation, characterization, and interpretation of these excess movements relied as much on qualitative factors—in conjunction with the narratives we uncover—as on the definition and quantification of traditional machine attributes like efficiency or utility. This research aims to uncover less conventional takes on some commonplace historical narratives and, through doing so, to foster more informed (and inclusive) approaches to the implementation of constantly evolving technologies.
Article
Reinforcement learning behavioral control (RLBC) is limited to an individual agent without any swarm mission, because it models the behavior priority learning as a Markov decision process. In this paper, a novel multi-agent reinforcement learning behavioral control (MARLBC) method is proposed to overcome such limitations by implementing joint learning. Specifically, a multi-agent reinforcement learning mission supervisor (MARLMS) is designed for a group of nonlinear second-order systems to assign the behavior priorities at the decision layer. Through modeling behavior priority switching as a cooperative Markov game, the MARLMS learns an optimal joint behavior priority to reduce dependence on human intelligence and high-performance computing hardware. At the control layer, a group of second-order reinforcement learning controllers are designed to learn the optimal control policies to track position and velocity signals simultaneously. In particular, input saturation constraints are strictly implemented via designing a group of adaptive compensators. Numerical simulation results show that the proposed MARLBC has a lower switching frequency and control cost than finite-time and fixed-time behavioral control and RLBC methods.
Chapter
In this chapter, a novel self-evolving data cloud-based PID-like controller (SEDCPID) is proposed for uncertain nonlinear systems. The SEDCPID controller is built on an evolving fuzzy system (EFS) based on data clouds, whose rules combine a non-parametric data cloud-based antecedent with a PID-like consequent. The antecedent data clouds use relative data density to represent the fuzzy firing strength of the input variables, instead of explicitly designed membership functions in the classical sense. The SEDCPID controller evolves its structure and adapts its parameters concurrently in an online manner. Density and distance information of the data clouds is used to add and delete clouds, and a stable recursive method updates the parameters of the PID-like sub-controllers for fast convergence. Based on Lyapunov stability theory, the stability of the proposed controller is proven, and the proof shows that the tracking errors converge to a small neighborhood. Numerical and experimental results illustrate the effectiveness of the proposed controller in handling uncertain nonlinear dynamic systems.
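The sketch below is a heavily simplified illustration of the data-cloud idea (my own construction, not the chapter's SEDCPID or its stability machinery): clouds are created from incoming (error, error-rate) data, their relative densities act as firing strengths, and the control signal is a density-weighted blend of PID-like sub-controllers.

```python
# Simplified cloud-weighted PID-like controller on a toy first-order plant.
class Cloud:
    def __init__(self, point):
        self.mean = list(point)
        self.count = 1
        self.integral = 0.0              # per-cloud integral state

    def update_mean(self, point):
        self.count += 1
        self.mean = [m + (p - m) / self.count for m, p in zip(self.mean, point)]

    def density(self, point):
        d2 = sum((p - m) ** 2 for p, m in zip(point, self.mean))
        return 1.0 / (1.0 + d2)          # Cauchy-type local density

class CloudPID:
    def __init__(self, kp=1.0, ki=0.1, kd=0.05, radius=0.5):
        self.clouds, self.gains, self.radius = [], (kp, ki, kd), radius
        self.prev_error = 0.0

    def control(self, error, dt):
        d_error = (error - self.prev_error) / dt
        self.prev_error = error
        point = (error, d_error)
        # add a new cloud when the point is far from all existing cloud means
        if not self.clouds or all(c.density(point) < 1.0 / (1.0 + self.radius ** 2)
                                  for c in self.clouds):
            self.clouds.append(Cloud(point))
        densities = [c.density(point) for c in self.clouds]
        total = sum(densities)
        kp, ki, kd = self.gains
        u = 0.0
        for c, lam in zip(self.clouds, densities):
            c.update_mean(point)
            c.integral += error * dt
            u += (lam / total) * (kp * error + ki * c.integral + kd * d_error)
        return u

if __name__ == "__main__":
    ctrl, y, setpoint = CloudPID(), 0.0, 1.0
    for _ in range(200):                 # toy plant: y' = -y + u
        u = ctrl.control(setpoint - y, dt=0.05)
        y += (-y + u) * 0.05
    print(round(y, 3))                   # approaches the setpoint of 1.0
```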
Book
How do we integrate artificial and human intelligence to use artificial intelligence to enhance our productivity, safety, and creativity? This book proposes that integrating AI into human teams will provide greater advantage than attempting to apply AI to replace human cognitive processing. Additionally, it provides methods for designing effective systems that include one or more humans and one or more AI entities to change the structure of human work. Integrating Artificial and Human Intelligence through Agent Oriented Systems Design explores why teamwork is necessary today for complex work environments. The book explains the processes and methods humans employ to effectively team with one another and presents the elements of artificial agents that permit them to function as team members in joint human and artificial agent teams. It discusses design goals and illustrates how methods to model the complex interactions among human and artificial agents can be expanded to enable interaction design to attain shared goals. Model-Based Systems Engineering (MBSE) tools that provide logical designs of human–agent teams, the AI within these teams, training to be deployed for human and artificial agent team members, and the interfaces between human and artificial agent team members are all covered. MBSE files containing profiles and examples for building MBSE models used in the design approach are featured on the author’s website (https://lodesterresci.com/hat). This book is intended for students, professors, engineers, and project managers associated with designing and developing AI systems or systems that seek to incorporate AI.
Article
Full-text available
A navigation system is described for a mobile robot equipped with a rotating ultrasonic range sensor. This navigation system is based on a dynamically maintained model of the local environment, called the composite local model. The composite local model integrates information from the rotating range sensor, the robot's touch sensor, and a pre-learned global model as the robot moves through its environment. Techniques are described for constructing a line segment description of the most recent sensor scan (the sensor model), and for integrating such descriptions to build up a model of the immediate environment (the composite local model). The estimated position of the robot is corrected by the difference in position between observed sensor signals and the corresponding symbols in the composite local model. A learning technique is described in which the robot develops a global model and a network of places. The network of places is used in global path planning, while the segments are recalled from the global model to assist in local path execution. This system is useful for navigation in a finite, pre-learned domain such as a house, office, or factory.
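The position-correction step can be pictured with a small geometric example. The following sketch (assumed segment matching by midpoints; not the paper's sensor processing or composite local model code) corrects an estimated pose by the average offset between observed segments and their nearest model segments.

```python
import math

# Correct an estimated pose from matched line segments: each observed segment
# is matched to the nearest model segment by midpoint distance, and the mean
# offset of matched pairs is applied to the pose estimate.
def midpoint(seg):
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def correct_position(estimated_pose, observed_segments, model_segments,
                     match_radius=0.5):
    dx_sum = dy_sum = n = 0
    for obs in observed_segments:
        ox, oy = midpoint(obs)
        mx, my = min((midpoint(m) for m in model_segments),
                     key=lambda p: math.hypot(p[0] - ox, p[1] - oy))
        if math.hypot(mx - ox, my - oy) <= match_radius:
            dx_sum += mx - ox
            dy_sum += my - oy
            n += 1
    if n == 0:
        return estimated_pose            # no reliable matches: keep the estimate
    x, y, theta = estimated_pose
    return (x + dx_sum / n, y + dy_sum / n, theta)

if __name__ == "__main__":
    model = [((0, 0), (4, 0)), ((0, 0), (0, 4))]
    observed = [((0.2, 0.1), (4.2, 0.1)), ((0.2, 0.1), (0.2, 4.1))]
    print(correct_position((1.0, 1.0, 0.0), observed, model))  # ~(0.8, 0.9, 0.0)
```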
Conference Paper
Full-text available
This paper describes a distributed software control structure developed for the CMU Rover, an advanced mobile robot equipped with a variety of sensors. Expert modules are used to control the operation of the sensors and actuators, interpret sensory and feedback data, build an internal model of the robot's environment, devise strategies to accomplish proposed tasks, and execute these strategies. Each expert module is composed of a master process and a slave process, where the master process controls the scheduling and working of the slave process. Communication among expert modules occurs asynchronously over a blackboard structure. Information specific to the execution of a given task is provided through a control plan. The system is distributed over a network of processors. Real-time operating system kernels local to each processor and an interprocess message communication mechanism ensure transparency of the underlying network structure. The various parts of the system are presented in this paper, and future work to be performed is mentioned.
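A generic version of the blackboard pattern described here might look like the sketch below (a toy with Python threads, not the CMU Rover's distributed implementation): expert modules run asynchronously and exchange data only through a shared, locked blackboard.

```python
import threading, time, queue

# Asynchronous expert modules communicating over a shared blackboard:
# each module posts entries under a topic and polls the topics it consumes.
class Blackboard:
    def __init__(self):
        self._lock = threading.Lock()
        self._entries = {}

    def post(self, topic, value):
        with self._lock:
            self._entries[topic] = value

    def read(self, topic):
        with self._lock:
            return self._entries.get(topic)

def sonar_expert(bb, stop):
    reading = 0
    while not stop.is_set():              # publish simulated sensor data
        reading += 1
        bb.post("sonar", reading)
        time.sleep(0.01)

def planner_expert(bb, stop, out):
    while not stop.is_set():              # consume the latest sonar data, plan
        r = bb.read("sonar")
        if r is not None:
            out.put(f"plan around obstacle seen at reading {r}")
        time.sleep(0.03)

if __name__ == "__main__":
    bb, stop, out = Blackboard(), threading.Event(), queue.Queue()
    threads = [threading.Thread(target=sonar_expert, args=(bb, stop)),
               threading.Thread(target=planner_expert, args=(bb, stop, out))]
    for t in threads: t.start()
    time.sleep(0.2); stop.set()
    for t in threads: t.join()
    print(out.qsize(), "plans produced")
```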
Article
Research on mobile robots began in the late sixties with the Stanford Research Institute’s pioneering work. Two versions of SHAKEY, an autonomous mobile robot, were built in 1968 and 1971. The main purpose of this project was “to study processes for the realtime control of a robot system that interacts with a complex environment” 〈NIL 69〉. Indeed, mobile robots were and still are a very convenient and powerful support for research on artificial intelligence oriented robotics. They possess the capacity to provide a variety of problems at different levels of generality and difficulty in a large domain including perception, decision making, communication, etc., which all have to be considered within the scope of the specific constraints of robotics: on-line computing, cost considerations, operating ability, and reliability.
Article
Computational models of the human stereo system can provide insight into general information processing constraints that apply to any stereo system, either artificial or biological. In 1977 Marr and Poggio proposed one such computational model, which was characterized as matching certain feature points in difference-of-Gaussian filtered images and using the information obtained by matching coarser resolution representations to restrict the search space for matching finer resolution representations. An implementation of the algorithm and its testing on a range of images was reported in 1980. Since then a number of psychophysical experiments have suggested possible refinements to the model and modifications to the algorithm. As well, recent computational experiments applying the algorithm to a variety of natural images, especially aerial photographs, have led to a number of modifications. In this paper, we present a version of the Marr-Poggio-Grimson algorithm that embodies these modifications, and we illustrate its performance on a series of natural images.
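The coarse-to-fine strategy can be shown on a 1-D toy signal. The sketch below (my own simplification; crude subsampling stands in for the difference-of-Gaussian channels) estimates disparity at low resolution and then searches only a narrow window around that estimate at full resolution.

```python
# Coarse-to-fine disparity estimation on 1-D signals.
def best_shift(left, right, max_shift, center=0, window=None):
    lo = -max_shift if window is None else center - window
    hi = max_shift if window is None else center + window
    best, best_err = 0, float("inf")
    for s in range(lo, hi + 1):
        err = n = 0
        for i in range(len(left)):
            j = i + s
            if 0 <= j < len(right):
                err += (left[i] - right[j]) ** 2
                n += 1
        if n and err / n < best_err:
            best, best_err = s, err / n
    return best

def coarse_to_fine_disparity(left, right, factor=4, max_shift=8):
    coarse_l = left[::factor]            # crude downsampling stands in for
    coarse_r = right[::factor]           # coarse difference-of-Gaussian channels
    coarse = best_shift(coarse_l, coarse_r, max_shift // factor)
    # refine around the scaled coarse estimate with a narrow search window
    return best_shift(left, right, max_shift, center=coarse * factor, window=factor)

if __name__ == "__main__":
    left = [0] * 20 + [5, 9, 5] + [0] * 17
    right = [0] * 26 + [5, 9, 5] + [0] * 11   # same feature shifted by +6
    print(coarse_to_fine_disparity(left, right))  # expect 6
```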
Conference Paper
Many research groups have studied autonomous vehicles as a challenge in Artificial Intelligence research [1–3]. These robots plan routes to a specified destination and navigate in the real world using visual or ultrasonic sensors. Most groups assume the environments are indoor but unknown, and their efforts have concentrated on the problem of building a world model from sensory data with little prior knowledge of the environment.
Conference Paper
Real-time intelligent robots usually consist of more than one processing unit (pu) to ensure parallel operation of several functions, so communication between pu's has to be supported. Each pu in a robot executes repetitive monitoring and controlling operations as well as information exchange to and from other pu's. Since the timing of each operation is independent of the others, robot programming is easier if the robot operating software supports concurrent process facilities. A self-contained robot, "Yamabico 9," has been constructed as a tool for investigating how a mobile robot understands the outer world. To support software production on the robot, a Robot Control System (RCS) has been implemented, including simple job commands and a supervisor call (SVC) system. The concurrent process monitor is part of RCS, and some of the SVCs provide these facilities. The monitor adopts a "message sending" method to synchronize the execution of two processes and to exchange information between processes and pu's. An example of a concurrent process program, "walk along left walls," is given to demonstrate the describing power of our system.
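The message-sending style of synchronization described here can be imitated with ordinary queues. The sketch below (a generic toy, not the Yamabico 9 RCS or its SVC interface) has a wall-following process request a range reading and block until the range process replies.

```python
import threading, queue

# Two concurrent robot processes synchronizing by message passing: a
# wall-follower asks a range process for the left-wall distance each step.
def range_process(requests, replies):
    distance = 1.0
    while True:
        msg = requests.get()              # block until a request arrives
        if msg == "stop":
            break
        distance = max(0.2, distance - 0.05)   # pretend the wall gets closer
        replies.put(distance)

def follow_left_wall(requests, replies, steps=10):
    for _ in range(steps):
        requests.put("read_left")         # message send: request a reading
        d = replies.get()                 # message receive: wait for the reply
        action = "steer right" if d < 0.5 else "go straight"
        print(f"left wall at {d:.2f} m -> {action}")
    requests.put("stop")

if __name__ == "__main__":
    req, rep = queue.Queue(), queue.Queue()
    t = threading.Thread(target=range_process, args=(req, rep))
    t.start()
    follow_left_wall(req, rep)
    t.join()
```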
Conference Paper
Mobile robots sense their environment and receive error-laden readings. They try to move a certain distance and direction, and do so only approximately. Rather than trying to engineer these problems away, it may be possible, and may be necessary, to develop map-making and navigation algorithms that explicitly represent these uncertainties yet still provide robust performance. The key idea is to use a relational map, which is rubbery and stretchy, rather than trying to place observations in a 2-D coordinate system.
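A minimal rendering of the relational-map idea might store only uncertain displacements between named places, as in the sketch below (my own simplification): positions are recovered by chaining relations along a path, and the uncertainty grows with every link.

```python
import math

# A relational map that stores only uncertain relative displacements between
# places; a place is never fixed in a global frame, but located by chaining
# relations, with variances accumulating along the chain.
class RelationalMap:
    def __init__(self):
        self.relations = {}               # (a, b) -> (dx, dy, variance)

    def add_relation(self, a, b, dx, dy, variance):
        self.relations[(a, b)] = (dx, dy, variance)
        self.relations[(b, a)] = (-dx, -dy, variance)

    def estimate(self, start, goal, path):
        # chain the relations along an explicit path of place names
        x = y = var = 0.0
        here = start
        for nxt in path:
            dx, dy, v = self.relations[(here, nxt)]
            x, y, var = x + dx, y + dy, var + v
            here = nxt
        assert here == goal
        return (x, y, math.sqrt(var))     # displacement and 1-sigma radius

if __name__ == "__main__":
    m = RelationalMap()
    m.add_relation("door", "desk", 3.0, 0.5, 0.04)
    m.add_relation("desk", "window", 1.0, 2.0, 0.09)
    print(m.estimate("door", "window", ["desk", "window"]))
    # -> roughly (4.0, 2.5) with ~0.36 m uncertainty radius
```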
Article
The Stanford Cart was a remotely controlled TV-equipped mobile robot. A computer program was written which drove the Cart through cluttered spaces, gaining its knowledge of the world entirely from images broadcast by an on-board TV system. The CMU Rover is a more capable, and nearly operational, robot being built to develop and extend the Stanford work and to explore new directions. The Cart used several kinds of stereopsis to locate objects around it in three dimensions and to deduce its own motion. It planned an obstacle-avoiding path to a desired destination on the basis of a model built with this information. The plan changed as the Cart perceived new obstacles on its journey. The system was reliable for short runs, but slow. The Cart moved 1 m every 10 to 15 min, in lurches. After rolling a meter it stopped, took some pictures, and thought about them for a long time. Then it planned a new path, executed a little of it, and paused again. It successfully drove the Cart through several 20-m courses (each taking about 5 h) complex enough to necessitate three or four avoiding swerves; it failed in other trials in revealing ways. The Rover system has been designed with maximum mechanical and control system flexibility to support a wide range of research in perception and control. It features an omnidirectional steering system, a dozen on-board processors for essential real-time tasks, and a large remote computer to be helped by a high-speed digitizing/data playback unit and a high-performance array processor. Distributed high-level control software similar in organization to the Hearsay II speech-understanding system and the beginnings of a vision library are being readied. By analogy with the evolution of natural intelligence, we believe that incrementally solving the control and perception problems of an autonomous mobile mechanism is one of the best ways of arriving at general artificial intelligence.
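The Cart's lurching stop-perceive-plan-execute cycle can be caricatured in a few lines. The sketch below (a toy 1-D world, not the Cart's or Rover's software) stops each meter, looks a short distance ahead, and swerves when the next step would meet a perceived obstacle.

```python
# Toy stop-and-go cycle: stop, perceive a short distance ahead, replan,
# then execute only a small piece of the plan before pausing again.
def perceive(position, world):
    return [obs for obs in world if 0 < obs - position <= 3.0]   # ~3 m of view

def plan(position, obstacles):
    if any(position < obs <= position + 1.0 for obs in obstacles):
        return "swerve"                   # sidestep the obstacle on this lurch
    return "forward"

def run(goal=10.0, world=(4.0, 7.0)):
    position, log = 0.0, []
    while position < goal:
        obstacles = perceive(position, world)   # stop and take pictures
        action = plan(position, obstacles)      # think, then replan
        position += 1.0                         # execute a little of the plan
        log.append((round(position, 1), action))
    return log

if __name__ == "__main__":
    for pos, action in run():
        print(pos, action)
```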
A. M. Flynn, "Redundant sensors for mobile robot navigation," M.S. thesis, Department of Electrical Engineering and Computer Science, M.I.T., Cambridge, MA, July 1985.
G. Giralt, R. Chatila, and M. Vaisset, "An integrated navigation and motion control system for autonomous multisensory mobile robots," in Robotics Research 1, Brady and Paul, Eds. Cambridge, MA: M.I.T.
W. E. L. Grimson, "Computational experiments with a feature based stereo algorithm," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-7, pp. 17-34, Jan. 1985.
Y. Kanayama, "Concurrent programming of intelligent robots," in Proc. IJCAI, 1983, pp. 834-838.
O. Khatib, "Dynamic control of manipulators in operational space," Sixth IFToMM Cong. Theory of Machines and Mechanisms, Dec. 1983.
M. L. Kreithen, "Orientational strategies in birds: a tribute to W. T. Keeton," in Behavioral Energetics: The Cost of Survival in Vertebrates. Columbus, OH: Ohio State University, 1983, pp. 3-28.
H. P. Moravec, "The Stanford Cart and the CMU Rover," Proc. IEEE.
N. J. Nilsson, "Shakey the robot," SRI AI Center, Tech. Note 323, Apr. 1984.
H. A. Simon, Sciences of the Artificial. Cambridge, MA: M.I.T., 1969.
S. Tsuji, "Monitoring of a building environment by a mobile robot," in Robotics Research 2, Hanafusa and Inoue, Eds. Cambridge, MA: M.I.T., 1985, pp. 349-356.
Conf. Robotics and Automat., pp. 824-829.