Conference Paper

A neuro-controller for robotic manipulators based on biologically-inspired visuo-motor co-ordination neural models


Abstract

This paper presents a novel scheme for sensor-based control of robotic manipulators by means of artificial neural networks. The system is able to control simple reaching tasks by fusing only visual and proprioceptive sensory data, without computational kinematic modeling of the arm structure. Thanks to the generalization features typical of the neural approach, the same neurocontroller has been easily adapted and successfully validated for controlling different manipulators with different mechanical structures, i.e. number of degrees of freedom, link length and weight, etc. The proposed scheme is directly inspired by research results in the field of neuroscience, specifically on the nervous structures and physiological mechanisms involved in sensory-motor coordination. From a psychological point of view, J. Piaget (1976) explained visuo-motor associations in his scheme of circular reaction. He observed how, by making endogenous movements and correlating the resulting arm and hand spatial locations, the brain allows an auto-association to be created between visual and proprioceptive sensing. The work presented in this paper is derived from the more recent DIRECT model proposed by D. Bullock et al. (1993). Significant and original modifications of this model have been introduced by the authors to simultaneously increase both system performance and biological coherence. The proposed neurocontroller was first simulated in both the 2-dimensional and 3-dimensional cases, and then implemented for experimental trials on two real robotic manipulators.
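The abstract gives no implementation details, but the circular-reaction mechanism it invokes can be illustrated with a minimal sketch: random endogenous movements pair proprioceptive postures with their observed visual consequences, and reaching is then performed by recalling the stored association. The 2-link planar geometry, link lengths and nearest-neighbour recall below are illustrative assumptions, not the authors' neurocontroller.

```python
import numpy as np

# Minimal sketch of circular-reaction learning. Illustrative assumptions:
# a planar 2-link arm and a nearest-neighbour associative memory; this is
# NOT the paper's neurocontroller, only the idea it builds on.

L1, L2 = 0.30, 0.25  # assumed link lengths [m]

def forward_kinematics(q):
    """Stand-in for the 'visual' observation of the hand position."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

rng = np.random.default_rng(0)

# Motor babbling: endogenous random movements create paired
# proprioceptive (joint) and visual (hand position) samples.
postures = rng.uniform(-np.pi, np.pi, size=(2000, 2))
positions = np.array([forward_kinematics(q) for q in postures])

def reach(target):
    """Reaching by auto-association: recall the babbled posture whose
    remembered visual consequence is closest to the target."""
    i = np.argmin(np.linalg.norm(positions - target, axis=1))
    return postures[i]

q = reach(np.array([0.3, 0.2]))
print("reached:", forward_kinematics(q))  # close to the target
```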


... On the other hand, biological systems exhibit remarkable adaptability; for example, the size of a growing child's arm changes but the brain can adapt and "automatically re-calibrate" its sensorimotor control processes. This is one reason why several researchers focused on biologically-based approaches to sensorimotor control (Saxon and Mukerjee, 1990; Asuni et al., 2003, 2006; Laschi et al., 2008; Hoffmann et al., 2017). ...
... The problem was simplified into a two-dimensional working space and was simulated with a robotic arm consisting of three degrees of freedom. Another study, described in Asuni et al. (2003, 2006), offered a more developed system working in three-dimensional space and simulated with a DEXTER robotic arm. But the visual space was effectively two-dimensional since the target objects were located on a planar table. ...
Article
Full-text available
Newborns demonstrate innate abilities in coordinating their sensory and motor systems through reflexes. One notable characteristic is circular reactions consisting of self-generated motor actions that lead to correlated sensory and motor activities. This paper describes a model for goal-directed reaching based on circular reactions and exocentric reference frames. The model is built using physiologically plausible visual processing modules and arm-control neural networks. The model incorporates map representations with ego- and exo-centric reference frames for sensory inputs, vector representations for motor systems, as well as local associative learning that results from arm explorations. The integration of these modules is simulated and tested in a three-dimensional spatial environment using Unity3D. The results show that, through self-generated activities, the model self-organizes to generate accurate arm movements that are tolerant to various sources of noise.
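The ego- and exo-centric map representations mentioned in the abstract rest on a standard frame change; as a worked equation (notation assumed here, not taken from the paper), a point seen at position x_ego in an egocentric frame maps to the exocentric frame as

\mathbf{x}_{\text{exo}} = R\,\mathbf{x}_{\text{ego}} + \mathbf{t},

where R is the rotation of the egocentric frame (e.g., head or eye orientation) and \mathbf{t} is the position of its origin in the exocentric frame. Representing both the seen hand and the target in the exocentric frame is what lets the learned hand-target associations survive changes of gaze or body posture.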
... A first implementation of the proposed model was presented in [12], for the case of the control of a generic robot arm. In this work, the implementation of the model has been improved with respect to that presented in [12]. In that work, the maps described in the previous section were implemented using Growing Neural Gas (GNG), which has been used for finding topological structures that closely reflect the structure of the input distribution. ...
Conference Paper
Full-text available
This paper presents the application of a neural approach to the control of a 7-DOF robotic head. The inverse kinematics problem is addressed for the control of the gaze fixation point of two cameras mounted on the robotic head. The proposed approach is based on a biologically-inspired model, which replicates the human brain's capability of creating associations between motor and sensory data by learning. The model is implemented here by self-organizing neural maps. During learning, the system creates relations between the motor data associated with endogenous movements performed by the robotic head and the sensory consequences of such motor actions, i.e. the final position of the gaze fixation point. The learnt relations are stored in the neural map structure and are then used, after learning, for generating motor commands aimed at reaching a given fixation point. The approach proposed here allows solving the inverse kinematics and joint redundancy problems for the ARTS robotic head with good accuracy and robustness. Experimental trials confirmed the system's capability to control the gaze direction and fixation point and also to manage the redundancy of the robotic head in reaching the target fixation point even with additional constraints, such as a clamped joint or two symmetric joint angles (e.g. eye joints).
... Complexity and difficulty in generalization represent their main deficiencies. Visuo-motor coordination neural models of humans [22], with their specific learning ability, are another motion principle: Guglielmelli et al. presented a neurocontroller to control a redundant manipulator [23]. With respect to the nature of the learning which occurs within an action-perception cycle, the previously mentioned controller was developed into a model-free learning-based framework to control the pose of the end-effector of a redundant robotic arm [24]. ...
Article
Full-text available
Abstract: Purpose: In this paper an innovative kinematic control algorithm is proposed for redundant robotic manipulators. The algorithm takes advantage of a bio-inspired approach. Design/methodology/approach: A simplified 2-DOF model is presented to handle kinematic redundancy in the x-y plane; an extension to three-dimensional tracking tasks is presented as well. A set of sample trajectories was used to evaluate the performance of the proposed algorithm. Findings: The results from the simulations confirm the continuity and accuracy of generated joint profiles for given end-effector trajectories, as well as algorithm robustness and singularity and self-collision avoidance. Originality/value: This paper shows how to control a redundant robotic arm by applying the human upper-arm-inspired concept of inter-joint dependency.
... In [5] a neural model controlling a robotic arm moving on a plane uses motor babbling to train, with an error-backpropagation algorithm, a forward model later used to train an inverse model capable of performing reaching actions. Other neural-network models have been proposed in recent years to further specify the detailed functioning of the circular-reaction hypothesis at a neural level (e.g., [6]), or to exploit associations formed with motor babbling and supervised algorithms to control complex robotic plants (e.g., [7]). ...
Conference Paper
Full-text available
An influential hypothesis of developmental psychology states that, in the first months of their life, infants perform exploratory/random movements ("motor babbling") in order to create associations between such movements and the resulting perceived effects. These associations are later used as building blocks to tackle more complex sensorimotor behaviours. Due to its underlying simplicity, motor babbling might be a learning strategy widely used in the early phases of child development. Various models of this process have been proposed that focus on the acquisition of reaching skills based on the synchronous association between the positions of the seen hand (or grasped object) and the proprioception of the postures that cause them. This research tries to understand, on a computational basis, whether the principles underlying motor babbling can be extended to the acquisition of behaviours more complex than reaching, such as the execution of non-linear movement trajectories for avoiding obstacles or the acquisition of movements directed to grasp objects. These behaviours are challenging for motor babbling as they involve the execution of movements, or sequences of movements, in time, and so they cannot be learned on the basis of simple synchronous associations between their neural representations and perceptive neural representations. The paper aims to show that infants might still use motor babbling for the development of these behaviours by overcoming its time limits on the basis of complementary mechanisms such as Pattern Generators and innate reflexes. The computational viability of this hypothesis is demonstrated by testing the proposed models with a 3D simulated dynamic eye-arm-hand robot working on a plane.
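The Pattern Generator mechanism the abstract invokes can be sketched minimally: a small, static parameter vector unfolds into a time-extended trajectory, so babbling only needs to form a synchronous association between static parameters and outcomes. The single sinusoidal oscillator and its parameters below are illustrative assumptions; real CPG models couple several oscillators.

```python
import numpy as np

# Minimal Pattern Generator sketch (illustrative, not the paper's model):
# (amplitude, frequency, phase) expands into a whole joint trajectory.

def pattern_generator(params, duration=1.0, dt=0.01):
    amp, freq, phase = params
    t = np.arange(0.0, duration, dt)
    # One rhythmic joint-angle trajectory over time.
    return amp * np.sin(2.0 * np.pi * freq * t + phase)

trajectory = pattern_generator((0.5, 1.0, 0.0))
print(trajectory.shape)  # (100,) joint angles sampled at 100 Hz
```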
... Biologically-inspired approaches dramatically influence robot design: biomechatronic design tends to make biomorphic robotic platforms complex and sophisticated in sensory-motor functions, and intrinsically adaptable, flexible and evolutionary, as biological systems are. Thus, controlling this kind of system requires solving problems related to complex sensory-motor coordination, for which biology can still be an effective source of inspiration (Guglielmelli et al. 2007). ...
Article
Full-text available
This paper presents a sensory-motor coordination scheme for a robot hand-arm-head system that provides the robot with the capability to reach an object while pre-shaping the fingers to the required grasp configuration and while predicting the tactile image that will be perceived after grasping. A model for sensory-motor coordination derived from studies in humans inspired the development of this scheme. A peculiar feature of this model is the prediction of the tactile image. The implementation of the proposed scheme is based on a neuro-fuzzy module that, after a learning phase, starting from visual data, calculates the position and orientation of the hand for reaching, selects the best-suited hand configuration, and predicts the tactile feedback. The implementation of the scheme on a humanoid robot allowed experimental validation of its effectiveness in robotics and provided perspectives on applications of sensory predictions in robot motor control.
Article
Full-text available
This paper describes the BUSCAMOS-Oil monitoring system, which is a robotic platform consisting of an autonomous surface vessel combined with an underwater vehicle. The system has been designed for the long-term monitoring of oil spills, including the search for the spill and the transmission of information on its location, extent, direction and speed. Both vehicles are controlled by two different types of bio-inspired neural networks: a Self-Organization Direction Mapping Network for trajectory generation and a Neural Network for Avoidance Behaviour for avoiding obstacles. The system's resilience is provided by bio-inspired algorithms implemented in a modular software architecture and controlled by redundant devices, giving the necessary robustness to operate in the difficult conditions typically found in long-term oil-spill operations. The efficacy of the vehicles' adaptive navigation system and long-term mission capabilities is shown in the experimental results.
Conference Paper
This paper presents a self-organizing neural network model for visuo-motor coordination of a redundant humanoid robot arm in reaching tasks. The proposed approach is based on a biologically-inspired model which replicates some characteristics of human control: learning occurs through an action-perception cycle and does not require explicit knowledge of the geometry of the manipulator. The transformation learned is a mapping from spatial movement direction to joint rotation. During learning, the system creates relations between the motor data associated with endogenous movements performed by the robotic arm and the sensory consequences of such motor actions, i.e. the final position and orientation of the end effector. The learnt relations are stored in the neural map structure and are then used, after learning, for generating motor commands aimed at reaching a given point in 3D space. The work is an extension of (E. Guglielmelli, et al.), including end-effector orientation control. Experimental trials confirmed the system's capability to control the end-effector position and orientation and also to manage the redundancy of the robotic manipulator in reaching the 3D target point even with additional constraints, such as one or more clamped joints, without additional learning phases.
Conference Paper
This paper presents a neural model for visuo-motor coordination of a redundant robotic manipulator in reaching tasks. The model was developed for, and experimentally validated on, a neurobotic platform for manipulation. The proposed approach is based on a biologically-inspired model, which replicates the human brain's capability of creating associations between motor and sensory data by learning. The model is implemented here by self-organizing neural maps. During learning, the system creates relations between the motor data associated with endogenous movements performed by the robotic arm and the sensory consequences of such motor actions, i.e. the final position of the end effector. The learnt relations are stored in the neural map structure and are then used, after learning, for generating motor commands aimed at reaching a given point in 3D space. The approach proposed here allows solving the inverse kinematics and joint redundancy problems for different robotic arms with good accuracy and robustness. In order to validate this, the same implementation has been tested on a PUMA robot, too. Experimental trials confirmed the system's capability to control the end-effector position and also to manage the redundancy of the robotic manipulator in reaching the 3D target point even with additional constraints, such as one or more clamped joints, tools of variable lengths, or no visual feedback, without additional learning phases.
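The closed loop sketched below illustrates the direction-to-joint-rotation scheme described in this and the preceding abstract. For a self-contained example, the learned self-organizing map is replaced by a numerical Jacobian pseudoinverse (an illustrative assumption; the papers learn the mapping, they do not compute it from a kinematic model), and clamped-joint tolerance is shown by simply zeroing the corresponding joint update, with no re-learning.

```python
import numpy as np

# Sketch only: planar 3-DOF arm and link lengths are assumptions, and the
# pseudoinverse stands in for the papers' learned self-organizing map.

L = np.array([0.30, 0.25, 0.20])  # assumed link lengths [m]

def fk(q):
    a = np.cumsum(q)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(q, eps=1e-6):
    J = np.zeros((2, q.size))
    for i in range(q.size):
        dq = np.zeros(q.size); dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
    return J

def reach(q, target, clamped=(), steps=500, gain=0.2):
    q = q.astype(float).copy()
    for _ in range(steps):
        d = target - fk(q)                  # spatial direction vector
        if np.linalg.norm(d) < 1e-4:
            break
        dq = np.linalg.pinv(jacobian(q)) @ (gain * d)
        for j in clamped:                   # a clamped joint just stops
            dq[j] = 0.0                     # contributing to the motion
        q += dq
    return q

q = reach(np.array([0.2, 0.2, 0.2]), np.array([0.4, 0.3]), clamped=(1,))
print(fk(q))  # close to [0.4, 0.3] despite the clamped joint
```

Redundancy is what makes the clamped-joint case work: several joint-rotation combinations move the end effector in the same spatial direction, so removing one joint's contribution still leaves a usable solution.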
Article
Full-text available
We propose a biologically realistic neural network that computes coordinate transformations for the command of arm reaching movements in 3-D space. This model is consistent with anatomical and physiological data on the cortical areas involved in the command of these movements. Studies of the neuronal activity in the motor (Georgopoulos et al., 1986; Schwartz et al., 1988; Caminiti et al., 1990a) and premotor (Caminiti et al., 1990b, 1991) cortices of behaving monkeys have shown that the activity of individual arm-related neurons is broadly tuned around a preferred direction of movements in 3-D space. Recent data demonstrate that in both frontal areas (Caminiti et al., 1990a,b, 1991) these cell preferred directions rotate with the initial position of the arm. Furthermore, the rotation of the population of preferred directions precisely corresponds to the rotation of the arm in space. The neural network model computes the motor command by combining the visual information about movement trajectory with the kinesthetic information concerning the orientation of the arm in space. The appropriate combination, learned by the network from spontaneous movement, can be approximated by a bilinear operation that can be interpreted as a projection of the visual information on a reference frame that rotates with the arm. This bilinear combination implies that neural circuits converging on a single neuron in the motor and premotor cortices can learn and generalize the appropriate command in a 2-D subspace but not in the whole 3-D space. However, the uniform distribution of cell preferred directions in these frontal areas can explain the computation of the correct solution by a population of cortical neurons. The model is consistent with the existing neurophysiological data and predicts how visual and somatic information can be combined in the different processing steps of the visuomotor transformation subserving visual reaching.
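The "bilinear operation" mentioned in the abstract can be written explicitly (standard notation assumed here, not quoted from the paper): the drive to a single motor or premotor unit takes the form

y \;=\; \mathbf{v}^{\top} W\, \mathbf{k} \;=\; \sum_{i,j} v_i\, W_{ij}\, k_j,

where \mathbf{v} encodes the visual information about the movement trajectory, \mathbf{k} the kinesthetic information about arm orientation, and W the learned weights. Because the visual term is multiplied by the arm-orientation term, the unit's preferred direction rotates with the arm, consistent with the recordings cited in the abstract.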
Conference Paper
Full-text available
This paper presents a research work on compliant control of an anthropomorphic robot arm used as a personal robot. In personal applications of robotics, human-robot interaction represents a critical factor for robot design and introduces strict requirements on its behavior and control, which has to ensure safety and effectiveness. In this work, the problem of controlling the Dexter anthropomorphic robot arm with variable compliance has been investigated, not only to ensure safety in the interaction with humans, but especially to increase the robot's functionality in tasks of physical interaction performed in co-operation with humans. Two different control schemes have been formulated and implemented in order to compare their performance experimentally. Both schemes aim at realizing a self-controlled compliant behavior without using information from force/torque sensors. The experimental comparison outlines how the performance of the two control systems is inverted with respect to the theoretical considerations, based on classical control theory, on their accuracy and effectiveness.
Conference Paper
Full-text available
The capability of autonomously discovering relations between perceptual data and motor actions is crucial for the development of robust adaptive robotic systems intended to operate in a changing and unknown environment. In the case of robotic tactile perception, proper interaction between contact sensing and motor control is the basic step towards the execution of complex motor procedures such as grasping and manipulation. In this paper we propose an approach to the development of tactile-motor coordination in robotics, based on a neural model of the human tactile-motor system. The definition of this model is based on the features of biological systems as investigated by neuroscience. The autonomous development of tactile-motor coordination achieved through the implementation of the neural model is evaluated by experimental trials using a sensorised prosthetic hand and a robotic manipulator. The proposed neural network architecture, linking changes in the sensed tactile pattern with the motor actions performed, is described, and experimental results are analysed and discussed.
Article
This article reviews some of my main theoretical advances before 1973 in a self-contained and nontechnical exposition. Among other features, the article describes some predictions which still need to be tested.
Article
This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution of the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that are used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement. No corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control.
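In compact notation (assumed here for clarity, not quoted from the abstract), the transformation DIRECT learns can be summarized as

\dot{\boldsymbol{\theta}} = G(\boldsymbol{\theta})\,\mathbf{d}, \qquad \mathbf{d} = \frac{\mathbf{x}_{\text{target}} - \mathbf{x}_{\text{effector}}}{\lVert \mathbf{x}_{\text{target}} - \mathbf{x}_{\text{effector}} \rVert},

where \mathbf{d} is the body-centered spatial direction vector from end effector to target and G(\boldsymbol{\theta}) is the learned, posture-dependent map from spatial directions to joint rotations. Motor equivalence follows because, for a redundant arm, many joint-rotation vectors produce the same spatial direction, so a clamped joint, a tool of different length, or a prism distortion only changes which solution the closed loop settles on within a single reach.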
Article
We present a new self-organizing neural network model that has two variants. The first variant performs unsupervised learning and can be used for data visualization, clustering, and vector quantization. The main advantage over existing approaches (e.g., the Kohonen feature map) is the ability of the model to automatically find a suitable network structure and size. This is achieved through a controlled growth process that also includes occasional removal of units. The second variant of the model is a supervised learning method that results from the combination of the above-mentioned self-organizing network with the radial basis function (RBF) approach. In this model it is possible—in contrast to earlier approaches—to perform the positioning of the RBF units and the supervised training of the weights in parallel. Therefore, the current classification error can be used to determine where to insert new RBF units. This leads to small networks that generalize very well. Results on the two-spirals benchmark and a vowel classification problem are presented that are better than any results previously published.
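The controlled growth process described in the abstract can be sketched in a few lines: a new unit is inserted halfway between the unit that has accumulated the most error and its worst neighbour. The dense boolean adjacency matrix and the halving of accumulated errors below are simplifying assumptions for illustration; the model's occasional removal of units is omitted.

```python
import numpy as np

def insert_unit(weights, error, adjacency):
    """Insert one unit between the highest-error unit and its worst
    neighbour (simplified growth step; no unit removal)."""
    q = int(np.argmax(error))                  # unit with largest error
    neigh = np.flatnonzero(adjacency[q])
    f = int(neigh[np.argmax(error[neigh])])    # its worst neighbour
    new_w = 0.5 * (weights[q] + weights[f])    # midpoint insertion
    weights = np.vstack([weights, new_w])
    error[q] *= 0.5                            # redistribute accumulated
    error[f] *= 0.5                            # error to the new unit
    error = np.append(error, error[q])
    n = weights.shape[0]
    A = np.zeros((n, n), dtype=bool)
    A[:-1, :-1] = adjacency
    A[q, f] = A[f, q] = False                  # split the old edge ...
    A[q, -1] = A[-1, q] = True                 # ... and connect both
    A[f, -1] = A[-1, f] = True                 # ends to the new unit
    return weights, error, A

# Toy usage: three units on a line, the middle one with most error.
W = np.array([[0.0], [1.0], [2.0]])
E = np.array([1.0, 3.0, 1.0])
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=bool)
W, E, A = insert_unit(W, E, A)
print(W.ravel())  # [0.  1.  2.  0.5] -- new unit between units 1 and 0
```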
Article
Accumulating neuropsychological, electrophysiological and behavioural evidence suggests that the neural substrates of visual perception may be quite distinct from those underlying the visual control of actions. In other words, the set of object descriptions that permit identification and recognition may be computed independently of the set of descriptions that allow an observer to shape the hand appropriately to pick up an object. We propose that the ventral stream of projections from the striate cortex to the inferotemporal cortex plays the major role in the perceptual identification of objects, while the dorsal stream projecting from the striate cortex to the posterior parietal region mediates the required sensorimotor transformations for visually guided actions directed at such objects.
Article
Although individual neurons in the arm area of the primate motor cortex are only broadly tuned to a particular direction in three-dimensional space, the animal can very precisely control the movement of its arm. The direction of movement was found to be uniquely predicted by the action of a population of motor cortical neurons. When individual cells were represented as vectors that make weighted contributions along the axis of their preferred direction (according to changes in their activity during the movement under consideration) the resulting vector sum of all cell vectors (population vector) was in a direction congruent with the direction of movement. This population vector can be monitored during various tasks, and similar measures in other neuronal populations could be of heuristic value where there is a neural representation of variables with vectorial attributes.
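As a worked equation consistent with the abstract's description (notation assumed), the population vector is

\mathbf{P} = \sum_{i} w_i\, \hat{\mathbf{c}}_i,

where \hat{\mathbf{c}}_i is the preferred direction of cell i and w_i is its weight, given by the change in that cell's activity during the movement under consideration; the direction of \mathbf{P} is congruent with the direction of the arm movement even though each individual cell is only broadly tuned.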