Fig. 8
Source publication
We present a sensory-motor coordination scheme for a robot hand-arm-head system that provides the robot with the capability to reach for and to grasp an object, while pre-shaping the fingers to the required grasp configuration. A model for sensory-motor coordination derived from studies in humans inspired the development of the scheme. A special fe...
Context in source publication
Context 1
... Preshaping Module receives as input the geometric features of the target object from the visual module and provides an arm position and a hand configuration suitable for grasping the object (Fig. 8). The Arm Cartesian Position (ACP) encodes the wrist position (x, y, z) and orientation (roll, pitch, yaw) in the arm reference system. The Hand Joints Position (HJP) is encoded by vectors that represent the encoder values of the hand motors. This module was implemented in a type I SANFIS neural network, which contains Gaussian ...
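To make the ACP/HJP encoding concrete, the following minimal Python sketch shows one way such a mapping could be organized. It is not the original SANFIS implementation: the Gaussian-membership layer, the normalized rule activations, and all names (gaussian_membership, preshape, the weight matrices) are hypothetical stand-ins for the trained network.

```python
# Hypothetical sketch of a preshaping mapping from object geometry to an
# arm pose (ACP) and hand motor targets (HJP); not the paper's SANFIS network.
from dataclasses import dataclass
import numpy as np

@dataclass
class ArmCartesianPosition:
    # ACP: wrist pose in the arm reference system
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

@dataclass
class HandJointsPosition:
    # HJP: target encoder values of the hand motors
    encoders: np.ndarray

def gaussian_membership(features, centers, sigmas):
    """Membership of each feature in each Gaussian fuzzy set (features x rules)."""
    return np.exp(-0.5 * ((features[:, None] - centers) / sigmas) ** 2)

def preshape(object_features, centers, sigmas, acp_weights, hjp_weights):
    """Map visual object features to an (ACP, HJP) pair via a fuzzy rule layer."""
    features = np.asarray(object_features, dtype=float)
    mu = gaussian_membership(features, centers, sigmas)
    firing = mu.prod(axis=0)            # rule firing strengths
    firing = firing / firing.sum()      # normalized activations
    acp = firing @ acp_weights          # 6 pose values (x, y, z, roll, pitch, yaw)
    hjp = firing @ hjp_weights          # one target value per hand motor
    return ArmCartesianPosition(*acp), HandJointsPosition(encoders=hjp)

# Toy usage with random parameters standing in for the trained network:
rng = np.random.default_rng(0)
n_features, n_rules, n_motors = 3, 8, 6
acp, hjp = preshape(
    object_features=[0.05, 0.12, 0.30],                # e.g. width, height, depth
    centers=rng.uniform(0, 0.4, (n_features, n_rules)),
    sigmas=np.full((n_features, n_rules), 0.1),
    acp_weights=rng.normal(size=(n_rules, 6)),
    hjp_weights=rng.normal(size=(n_rules, n_motors)),
)
print(acp, hjp.encoders)
```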
Similar publications
Light-driven nano/micromotors are attracting much attention, not only as molecular devices but also as components of bioinspired robots. In nature, several pathogens such as Listeria use actin polymerisation machinery for their propulsion. Despite the development of various motors, it remains challenging to mimic natural systems to create artificia...
Citations
... Examples of control systems for mobile robots that predict visual sensory data using forward models are [9,10], while [11][12][13] proposed systems that anticipate the dynamics of moving objects in order to accomplish catching tasks. Other models based on prediction have been proposed to improve the performance of humanoid robots in different tasks: visual pursuit and prediction of the target motion [14][15][16][17], anticipation in reaching [18] or manipulation tasks [19][20][21]. ...
In this article, we present our initial work on sequence prediction of a visual target by implementing a cortically inspired method, namely Hierarchical Temporal Memory (HTM). As a preliminary test, we employ HTM on periodic functions to quantify prediction performance with respect to prediction steps. We then perform simulation experiments on the iCub humanoid robot simulated in the Neurorobotics Platform. We use the robot as an embodied agent, which enables HTM to receive sequences of visual target positions from its camera in order to predict target positions along different trajectories such as horizontal, vertical and sinusoidal. The obtained results indicate that the HTM-based method can be customized for robotics applications that require adapting to spatiotemporal changes in the environment and acting accordingly.
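As a rough illustration of the evaluation described above (prediction error as a function of prediction steps on a periodic signal), the sketch below substitutes a plain autoregressive predictor for HTM; the signal, the predictor and all function names are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: the paper uses HTM; a simple autoregressive predictor stands in
# here so the horizon-vs-error evaluation on a periodic signal can be shown.
import numpy as np

def fit_ar(signal, order=5):
    """Least-squares fit of x[t] = w . x[t-order:t]."""
    X = np.array([signal[i:i + order] for i in range(len(signal) - order)])
    y = signal[order:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_ahead(signal, w, steps):
    """Roll the AR model forward 'steps' times from the end of 'signal'."""
    history = list(signal[-len(w):])
    for _ in range(steps):
        history.append(np.dot(w, history[-len(w):]))
    return history[-1]

t = np.linspace(0, 20 * np.pi, 2000)
target = np.sin(t)                          # periodic test signal
w = fit_ar(target[:1500])

for steps in (1, 5, 10, 20):
    preds = [predict_ahead(target[:i], w, steps) for i in range(1500, 1980)]
    truth = target[1499 + steps: 1979 + steps]
    print(steps, np.mean((np.array(preds) - truth) ** 2))  # error vs. horizon
```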
... Nevertheless, the limited literature about biological inspiration in visual-based robot grasping at the functional level suggests that the subject still has much to offer. In fact, for most biology-inspired grasping research available in the literature, the link with neuroscience is usually a general inspiration with limited impact on the final implementation (Kragic and Christensen 2003; Laschi et al. 2006). On the opposite end are works that appear more biologically plausible, but whose relation to the life sciences is not clarified. ...
Mutual interest between the fields of robotics and cognitive sciences has been steadily growing in recent years, especially through the bridging of artificial intelligence research. Nevertheless, the differences in goals and methodology and the lack of a common language still make true interdisciplinary research a pioneering endeavor. As the following review will expose, grasping is no exception to this situation. A brief description of traditional and bio-inspired research in robotic vision-based grasping is presented and critically discussed in this chapter, with the purpose of defining a few important guidelines required to achieve fruitful cross-disciplinary research.
... This means that the executed actions are working as planned and no modifications to the plan are necessary. This is the main principle of EP control architectures [1], [5], [12], [13]. The main idea is to execute sensory processing and behavior planning only if the expected sensory feedback is different from the actual one. ...
... With a camera on its end effector, the robot was able to predict the next camera images based on the old images and on the arm motor commands. In [12] and [13], the EP architecture was implemented to accomplish a grasping task. The robot had the ability to grasp an object by predicting the tactile image that would be perceived after reaching for it. ...
Expected perception (EP)-based control systems use the robotic system's internal models and interaction with the environment to predict the future response of their sensory inputs. By comparing the sensory predictions with actual sensory data, the EP control system monitors the error between the predicted and the actual sensor observations. If the error is small, the system may decide to neglect the input and skip any corrective action, thus saving computational and energy resources. If the mismatch is large, the system will further process the sensor signal to compute a corrective action through feedback. So far, EP systems have been implemented for predictions based on a robot's motion. In this article, an EP system is applied to predict the dynamics and anticipate the motion of an external object. The new control system is implemented in a humanoid robot, the iCub. The robot reaches in anticipation for an object's future position by predicting its trajectory and correcting the arm's position only when necessary. The results of the EP-based controller are analyzed and compared against a standard controller. The new EP-based controller is less computationally demanding and more energy efficient for a marginal loss in the tracking error.
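The gating logic described above can be summarized in a few lines. The sketch below is a minimal rendering of the EP principle under assumed placeholder functions (predict_object_position, read_sensor, correct_arm); it is not the actual iCub controller.

```python
# Minimal Expected Perception (EP) loop: cheap prediction every cycle,
# full processing and correction only when prediction and measurement disagree.
import numpy as np

def ep_control_loop(predict_object_position, read_sensor, correct_arm,
                    threshold, n_steps):
    """Return how many corrective actions were actually triggered."""
    corrections = 0
    for step in range(n_steps):
        expected = predict_object_position(step)   # internal-model prediction
        observed = read_sensor(step)               # actual sensory measurement
        error = np.linalg.norm(np.asarray(observed) - np.asarray(expected))
        if error > threshold:
            # Large mismatch: process the input fully and correct the arm.
            correct_arm(observed)
            corrections += 1
        # Small mismatch: skip further processing, saving computation and energy.
    return corrections
```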
... Nevertheless, the limited literature about biological inspiration in visual-based robot grasping at the functional level suggests that the subject still has much to offer. In fact, for most biology-inspired grasping research available in the literature, the link with neuroscience is usually a general inspiration with limited impact on the final implementation (Kragic & Christensen, 2003; Laschi et al., 2006). On the opposite end are works that appear more biologically plausible, but whose relation to the life sciences is not clarified. ...
This book presents interdisciplinary research that pursues the mutual enrichment of neuroscience and robotics. Building on experimental work, and on the wealth of literature regarding the two cortical pathways of visual processing - the dorsal and ventral streams - we define and implement, computationally and on a real robot, a functional model of the brain areas involved in vision-based grasping actions.
Grasping in robotics is largely an unsolved problem, and we show how the bio-inspired approach is successful in dealing with some fundamental issues of the task. Our robotic system can safely perform grasping actions on different unmodeled objects, demonstrating especially reliable visual and visuomotor skills.
The computational model and the robotic experiments help validate theories on the mechanisms employed by the brain areas most directly involved in grasping actions. This book offers new insights and research hypotheses regarding such mechanisms, especially concerning the interaction between the dorsal and ventral streams. Moreover, it helps establish a common research framework for neuroscientists and roboticists regarding research on brain functions.
... An interesting work using a fuzzy controller was presented in [7], where a visuo-motor coordination scheme for a robot hand-arm-head system provides the robot with the capability to reach for and grasp an object while pre-shaping the fingers to the required grasp configuration. In this controller, the visual data were used to compute the position and orientation of the hand for grasping. ...
Abstract. We describe in this contribution a unified controller implemented to simplify the creation of behaviors and interaction metaphors that robots may have with human users. This controller is based on a multi-objective optimization technique allowing the simultaneous achievement of different constraints and goals. The core of the developed algorithm is based on the particle swarm optimization (PSO) technique. Through some examples, namely a wheeled robot following persons in populated and cluttered environments and a humanoid reaching for objects, we show how PSO is effective in solving the given problems in flexible and generic ways. We detail the algorithm in this paper and give some results obtained from simulations and field trials.
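For readers unfamiliar with PSO, the following self-contained sketch minimizes a toy cost that blends a reaching objective with a posture penalty. It is only a generic PSO loop under assumed parameters and an illustrative cost; it is not the unified controller described in the paper.

```python
# Generic particle swarm optimization over a blended (multi-objective) cost.
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # personal bests
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()        # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, pbest_cost.min()

# Toy blended cost: reach a Cartesian target while staying near a neutral
# posture (the "forward kinematics" and weights are purely illustrative).
target = np.array([0.3, 0.2, 0.4])
def cost(q):
    hand = q[:3]                       # stand-in for forward kinematics
    return np.linalg.norm(hand - target) + 0.1 * np.linalg.norm(q)

best_q, best_cost = pso_minimize(cost, dim=5)
print(best_q, best_cost)
```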
... Perception can thus contrast predicted signals with actually observed measurements. This mechanism of expected perception is believed to be used routinely by humans when addressing a variety of tasks including locomotion, grasping, manipulation, etc. [1], [2], [3]. It is, however, important to note that the increased complexity of such robot systems, in particular the large number of degrees of freedom and sophisticated sensing, makes the calibration process quite challenging. ...
Humanoid robots are complex sensorimotor systems where the existence of internal models is of utmost importance both for control purposes and for predicting the changes in the world arising from the system's own actions. This so-called expected perception relies on the existence of accurate internal models of the robot's sensorimotor chains.
We assume that the kinematic model is known in advance but that the absolute offsets of the different axes cannot be directly retrieved from encoders. We propose a method to estimate such parameters, the zero position of the joints of a humanoid robotic head, by relying on proprioceptive sensors such as relative encoders, inertial sensing and visual input.
We show that our method can estimate the correct offsets of the different joints (i.e. absolute positioning) in a continuous, online manner. Not only is the method robust to noise, but it can also cope with and adjust to abrupt changes in the parameters. Experiments with three different robotic heads are presented and illustrate the performance of the methodology as well as the advantages of using such an approach.
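As a toy illustration of estimating an absolute joint offset online from a relative encoder and an absolute reference (such as one derived from inertial or visual sensing), the sketch below uses a simple exponential-forgetting estimator. This is an assumption-laden stand-in with synthetic signals, not the authors' estimator.

```python
# Toy online offset estimation: track the zero-position offset of one joint
# from the residual between an absolute reference and the relative encoder.
# Exponential forgetting lets the estimate re-converge after an abrupt change.
import numpy as np

def track_offset(absolute_ref, relative_enc, forgetting=0.05):
    """Return the running offset estimate at each time step."""
    estimate = 0.0
    history = []
    for a, r in zip(absolute_ref, relative_enc):
        residual = a - r                          # instantaneous offset measurement
        estimate += forgetting * (residual - estimate)
        history.append(estimate)
    return np.array(history)

rng = np.random.default_rng(1)
t = np.arange(2000)
true_offset = np.where(t < 1000, 0.2, -0.1)       # abrupt change halfway through
joint = 0.5 * np.sin(0.01 * t)                    # true joint trajectory
relative = joint - true_offset                    # encoder lacks the offset
absolute = joint + 0.01 * rng.standard_normal(t.size)  # noisy absolute reference

est = track_offset(absolute, relative)
print(est[900], est[-1])   # close to 0.2 before the change, close to -0.1 after
```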
... Background knowledge plays a helpful role here, as it reduces the computational burden of perception and motor coordination tasks in partially structured environments. An application of internal models to the prediction of tactile feedback in grasping is presented in [8]. In [8], the sensory prediction is part of a grasping action, controlled by a scheme based on Expected Perception (EP) [6]. ...
... An application of internal model to the prediction of tactile feedback in grasping is presented in [8]. In [8], the sensory prediction is part of a grasping action, controlled by a scheme based on Expected Perception (EP) [6]. Internal models in an EP scheme can be built through experience of the real world by means of neural-network-based learning mechanisms. ...
The maintenance of a stable and coherent representation of the surrounding environment is an essential capability in cognitive robotic systems. Most systems employ some form of 3D perception to create internal representations of space (maps) to support tasks such as navigation, manipulation and interaction. The creation and update of such representations may represent a significant share of the overall computation performed by the robot. In this paper we propose an architecture based on the concept of Expected Perception that allows lightweight map updates whenever the course of action happens according to the robot's expectations. It is only when the robot's predictions and the real-world outcomes differ that corrections must be carried out to their full extent. We performed experiments and show results on a real robotic platform with stereo (3D) perception, where map corrections are proposed by simple image-level (2D) comparisons.
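A minimal sketch of this gating idea follows, with placeholder functions for image grabbing, view prediction and the 3D map correction; none of these stand for the paper's actual components.

```python
# Expected-Perception gating for map maintenance: a cheap 2D image comparison
# decides whether the expensive 3D map correction runs at all.
import numpy as np

def ep_map_update(map_state, pose, grab_camera_image, render_expected_view,
                  update_map_3d, threshold=0.05):
    observed = grab_camera_image()                     # actual 2D image
    expected = render_expected_view(map_state, pose)   # prediction from the map
    mismatch = np.mean(np.abs(observed.astype(float) - expected.astype(float)))
    if mismatch <= threshold:
        return map_state            # world matches expectations: keep the map as is
    return update_map_3d(map_state, observed, pose)    # full 3D correction
```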
... Human movements have been suggested to be optimal in terms of metrics like jerk [Flash and Hogan 1985], torque change [Uno et al. 1989], variance [Harris and Wolpert 1998], effort and error [Todorov and Jordan 2002], or the energy of motor neurons [Guigon et al. 2007]. This is often reflected in the mathematical models that aim to capture and reproduce these movements (see [Todorov and Jordan 2004] for an overview and [Laschi et al. 2006] for one such application in robotics). Optimal control is the problem of finding a control law for a system such that some optimality criterion is satisfied. ...
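The minimum-jerk criterion cited above has a well-known closed-form point-to-point solution, sketched here as a concrete example of such an optimality criterion.

```python
# Minimum-jerk point-to-point profile (Flash & Hogan):
# x(t) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5), with s = t / T.
import numpy as np

def minimum_jerk(x0, xf, duration, n=100):
    t = np.linspace(0.0, duration, n)
    s = t / duration
    pos = x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    return t, pos

times, trajectory = minimum_jerk(x0=0.0, xf=0.3, duration=1.0, n=50)
print(trajectory[0], trajectory[-1])   # starts at x0, ends at xf
```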
Humanoid robotics is coming of age, with faster and more precise robots. To cope with mechanical complexity, research has begun to look beyond the usual framework of robotics, towards the life sciences, in order to better organize the control of movement. This thesis explores the link between human movement and the control of anthropomorphic systems such as humanoid robots. First, using classical robotics methods such as optimization, we study the principles underlying repetitive human movements, such as those performed when playing with a yoyo. We then focus on locomotion, drawing inspiration from neuroscience results that highlight the role of the head in human walking. By developing an interface that lets a user steer the robot's head, we propose a whole-body motion control method for a humanoid robot, including step generation, that allows the body to follow the movement of the head. This idea is pursued in the final study, in which we analyze goal-directed locomotion of human subjects in order to extract characteristics of the movement in the form of invariants. By linking the notion of "invariant" in neuroscience to that of "kinematic task" in humanoid robotics, we develop a method to produce realistic locomotion for other anthropomorphic systems. In this case, the results are illustrated on the humanoid robot HRP2 of LAAS-CNRS. The general contribution of this thesis is to show that, although motion planning for humanoid robots can be handled by classical robotics methods, producing realistic movements requires combining these methods with the systematic and formal observation of human behavior. ABSTRACT: Humanoid robotics is coming of age with faster and more agile robots. To complement the physical complexity of humanoid robots, the robotics algorithms being developed to derive their motion have also become progressively complex. The work in this thesis spans two research fields, human neuroscience and humanoid robotics, and brings some ideas from the former to aid the latter. By exploring the anthropological link between the structure of a human and that of a humanoid robot, we aim to guide conventional robotics methods like local optimization and task-based inverse kinematics towards more realistic human-like solutions. First, we look at dynamic manipulation of human hand trajectories while playing with a yoyo. By recording human yoyo playing, we identify the control scheme used as well as a detailed dynamic model of the hand-yoyo system. Using optimization, this model is then used to implement stable yoyo-playing within the kinematic and dynamic limits of the humanoid HRP-2. The thesis then extends its focus to human and humanoid locomotion. We take inspiration from human neuroscience research on the role of the head in human walking and implement a humanoid robotics analogy to this. By allowing a user to steer the head of a humanoid, we develop a control method to generate deliberative whole-body humanoid motion, including stepping, purely as a consequence of the head movement. This idea of understanding locomotion as a consequence of reaching a goal is extended in the final study, where we look at human motion in more detail.
Here, we aim to draw a link between “invariants” in neuroscience and “kinematic tasks” in humanoid robotics. We record and extract stereotypical characteristics of human movements during a walking and grasping task. These results are then normalized and generalized so that they can be regenerated for other anthropomorphic figures with kinematic limits different from those of humans. The final experiments show a generalized stack of tasks that can generate realistic walking and grasping motion for the humanoid HRP-2. The general contribution of this thesis is to show that while motion planning for humanoid robots can be tackled by classical methods of robotics, the production of realistic movements necessitates combining these methods with the systematic and formal observation of human behavior.
The aim of blended cognition is to contribute to the design of more realistic and efficient robots by looking at the way humans can combine several kinds of affective, cognitive, sensorimotor and perceptual representations. This chapter is about vision-for-action. In humans and non-human primates (as well as in most mammals), motor behavior in general, and visuomotor representations for grasping in particular, are influenced by emotions and by affective perception of the salient properties of the environment. This aspect of motor interaction is not examined in depth in the biologically plausible robot models of grasping that are currently available. The aim of this chapter is to propose a model that can help make neurorobotics solutions more embodied, by integrating empirical evidence from affective neuroscience with neural evidence from vision and motor neuroscience. Our integration is an attempt to make a neurorobotic model of vision and grasping more compatible with the embodied view of cognition and perception followed in neuroscience, which seems to be the only view able to take into account the biological complexity of cognitive systems and, accordingly, to explain their high flexibility and adaptability with respect to the environment they inhabit.