ABSTRACT: This paper presents a novel approach for incremental semiparametric inverse dynamics learning. In particular, we consider the mixture of two approaches: parametric modeling based on rigid body dynamics equations and nonparametric modeling based on incremental kernel methods, with no prior information on the mechanical properties of the system. This yields an incremental semiparametric approach that leverages the advantages of both the parametric and nonparametric models. We validate the proposed technique by learning the dynamics of one arm of the iCub humanoid robot.
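The semiparametric idea above can be sketched in a few lines: first fit the linear-in-parameters rigid-body model, then fit a kernel machine on its residual. The synthetic data, the RBF kernel, and the batch (rather than incremental) fitting below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-DOF "inverse dynamics": a linear-in-parameters rigid-body part
# plus an unmodeled nonlinearity (e.g. friction); all values are hypothetical.
q = rng.uniform(-1, 1, (200, 3))            # [position, velocity, acceleration]
true_params = np.array([2.0, 0.5, 1.5])     # inertial-style parameters
tau = q @ true_params + 0.3 * np.sin(5 * q[:, 0])  # torque with nonlinear residual

# 1) Parametric part: least-squares fit of the rigid-body regressor.
params, *_ = np.linalg.lstsq(q, tau, rcond=None)
residual = tau - q @ params

# 2) Nonparametric part: kernel ridge regression on the residual.
def rbf(A, B, gamma=10.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

K = rbf(q, q)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(q)), residual)

def predict(x):
    # semiparametric prediction = parametric model + kernel correction
    return x @ params + rbf(x, q) @ alpha

err_semi = np.abs(predict(q) - tau).mean()
err_param = np.abs(q @ params - tau).mean()
```

The combined predictor should track the torque more closely than the parametric model alone, which is the advantage the abstract claims for the mixture.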
ABSTRACT: With robots leaving factory environments and entering less controlled domains, possibly sharing living space with humans, safety needs to be guaranteed. To this end, some form of awareness of their body surface and the space surrounding it is desirable. In this work, we present a unique method that lets a robot learn a distributed representation of the space around its body (or peripersonal space) by exploiting whole-body artificial skin and physical contact with the environment. Every taxel (tactile element) has a visual receptive field anchored to it. Starting from an initially blank state, the distance of every object entering this receptive field is visually perceived and recorded, together with information about whether the object eventually contacted the particular skin area. This gives rise to a set of probabilities, updated incrementally, that carry information about the likelihood of particular events in the environment contacting a particular set of taxels. The learned representation naturally serves the purpose of predicting contacts with the whole body of the robot, which is of clear behavioral relevance. Furthermore, we devised a simple avoidance controller that is triggered by this representation, thus endowing the robot with a "margin of safety" around its body. Finally, simply reversing the sign in the controller gives rise to simple "reaching" for objects in the robot's vicinity, which automatically proceeds with the most activated (closest) body part.
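The incremental probability update described above can be illustrated with a toy single-taxel model: count, per discretized distance, how often an object seen at that distance went on to contact the taxel. The binning scheme, the uninformative prior for the blank state, and the API are assumptions for illustration, not the paper's exact representation.

```python
from collections import defaultdict

class TaxelRF:
    """Toy receptive-field model for one taxel: incremental counts of how
    often an object perceived at a given (binned) distance ends up
    contacting the taxel."""
    def __init__(self, bin_size=0.02):
        self.bin_size = bin_size          # distance bin width in meters
        self.hits = defaultdict(int)      # bin -> number of contacts
        self.trials = defaultdict(int)    # bin -> number of observations

    def _bin(self, distance):
        return int(distance / self.bin_size)

    def update(self, distance, contacted):
        b = self._bin(distance)
        self.trials[b] += 1
        self.hits[b] += int(contacted)

    def p_contact(self, distance):
        b = self._bin(distance)
        if self.trials[b] == 0:
            return 0.5                    # blank state: uninformative prior
        return self.hits[b] / self.trials[b]

taxel = TaxelRF()
for _ in range(10):
    taxel.update(0.01, True)    # nearby objects end up contacting the taxel
    taxel.update(0.15, False)   # distant objects pass by without contact
```

After these updates the taxel predicts contact for close objects, no contact for distant ones, and stays at the prior for distances it has never observed.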
ABSTRACT: Recent developments in human-robot interaction show how the ability to communicate with people in a natural way is of great importance for artificial agents. The implementation of facial expressions has been found to significantly increase the interaction capabilities of humanoid robots. For speech, displaying a correct articulation with sound is mandatory to avoid audiovisual illusions like the McGurk effect (leading to comprehension errors) as well as to enhance the intelligibility in noisy conditions. This work describes the design, construction and testing of an animatronic talking face developed for the iCub robot. This talking head has an articulated jaw and four independent lip movements actuated by five motors. It is covered by a specially designed elastic tissue cover whose hemlines at the lips are attached to the motors via connecting linkages. The mechanical design and the control scheme have been evaluated by speech intelligibility in noise (SPIN) perceptual tests that demonstrate an absolute 10% intelligibility gain provided by the jaw and lip movements over the audio-only display.
Article · Sep 2015 · International Journal of Humanoid Robotics
ABSTRACT: This paper describes the design and realization of novel tactile sensors based on soft materials and magnetic sensing. In particular, the goal was to realize: 1) soft; 2) robust; 3) small; and 4) low-cost sensors that can be easily fabricated and integrated on robotic devices that interact with the environment. We targeted a number of desired features, the most important being: 1) high sensitivity; 2) low hysteresis; and 3) repeatability. The sensor consists of a silicone body in which a small magnet is immersed; a Hall-effect sensor placed below the silicone body measures the magnetic field generated by the magnet, which changes when the magnet is displaced due to an applied external pressure. Two different versions of the sensor have been manufactured, characterized, and mounted on an anthropomorphic robotic hand. Experiments, in which the hand interacts with real-world objects, are reported.
Article · Aug 2015 · IEEE Sensors Journal
ABSTRACT: In this paper, we present an ultraflexible tactile sensor, in a piezoelectric-oxide-semiconductor FET configuration, composed of a poly[vinylidenefluoride-co-trifluoroethylene] capacitor with embedded readout circuitry, based on nMOS polysilicon electronics, integrated directly on polyimide. The ultraflexible device is designed according to an extended gate configuration. The sensor exhibits enhanced piezoelectric properties, thanks to the optimization of the poling procedure (with electric field up to 3 MV/cm), reaching a final piezoelectric coefficient of 47 pC/N. The device has been electromechanically tested by applying perpendicular forces with a minishaker. The tactile sensor, biased in a common-source arrangement, shows a linear response to increasing sinusoidal stimuli (up to 2 N) and increasing operating frequencies (up to 1200 Hz), obtaining a response up to 430 mV/N at 200 Hz for the sensor with the highest piezoelectric coefficient. The sensor performance was also tested after several cycles of controlled bending at different humidity levels, to investigate the device behavior in realistic conditions.
ABSTRACT: Hybrid Deep Neural Network - Hidden Markov Model (DNN-HMM) systems have become the state-of-the-art in Automatic Speech Recognition. In this paper we experiment with DNN-HMM phone recognition systems that use measured articulatory information. Deep Neural Networks are used both to compute phone posterior probabilities and to perform Acoustic-to-Articulatory Mapping (AAM). The AAM processes we propose are based on deep representations of the acoustic and the articulatory domains. Such representations make it possible to: (i) create different pre-training configurations of the DNNs that perform AAM; (ii) perform AAM on a transformed (through DNN Autoencoders) articulatory feature (AF) space that captures strong statistical dependencies between articulators. Traditionally, neural networks that approximate the AAM are used to generate AFs that are appended to the observation vector of the speech recognition system. Here we also study a novel approach (AAM-based pretraining) where a DNN performing the AAM is instead used to pretrain the DNN that computes the phone posteriors. Evaluations on both the MOCHA-TIMIT msak0 and the mngu0 datasets show that: (i) the recovered AFs reduce Phone Error Rate (PER) in both clean and noisy speech conditions, with a maximum 10.1% relative phone error reduction in clean speech conditions obtained when Autoencoder-transformed AFs are used; (ii) AAM-based pretraining could be a viable strategy to exploit the available small articulatory datasets to improve acoustic models trained on large acoustic-only datasets.
Article · Jun 2015 · Computer Speech & Language
ABSTRACT: The ability to learn about and efficiently use tools constitutes a desirable property for general purpose humanoid robots, as it allows them to extend their capabilities beyond the limitations of their own body. Yet, it is a topic that has only recently been tackled by the robotics community. Most of the studies published so far make use of tool representations that allow their models to generalize the knowledge among similar tools in a very limited way. Moreover, most studies assume that the tool is always grasped in its common or canonical grasp position, thus not considering the influence of the grasp configuration on the outcome of the actions performed with the tool. In the current paper we present a method that tackles both issues simultaneously by using an extended set of functional features and a novel representation of the effect of the tool use. Together, they implicitly account for the grasping configuration and allow the iCub to generalize among tools based on their geometry. Moreover, learning happens in a self-supervised manner: first, the robot autonomously discovers the affordance categories of the tools by clustering the effects of their usage. These categories are subsequently used as a teaching signal to associate visually obtained functional features with the tool's expected affordance. In the experiments, we show how this technique can be effectively used to select, given a tool, the best action to achieve a desired effect.
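The self-supervised pipeline above (cluster effects, then use the cluster labels as a teaching signal) can be sketched with toy data. The 2-D effects and features, the tiny two-means loop, and the nearest-centroid classifier are all illustrative assumptions, not the paper's feature set or learner.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: each tool use yields a 2-D "effect" (e.g. object
# displacement) and a 2-D geometric feature vector of the tool/grasp.
effects = np.vstack([rng.normal([0.0, 0.0], 0.05, (20, 2)),   # one effect type
                     rng.normal([0.3, 0.3], 0.05, (20, 2))])  # another type
features = np.vstack([rng.normal([1.0, 0.0], 0.1, (20, 2)),
                      rng.normal([0.0, 1.0], 0.1, (20, 2))])

# 1) Self-supervised step: discover affordance categories by clustering
#    the effects (minimal 2-means sketch).
centers = effects[[0, -1]].copy()
for _ in range(10):
    labels = np.argmin(((effects[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([effects[labels == k].mean(0) for k in (0, 1)])

# 2) Teaching-signal step: nearest-centroid classifier from visual features
#    to the discovered affordance category.
f_centroids = np.array([features[labels == k].mean(0) for k in (0, 1)])

def predict_affordance(f):
    return int(np.argmin(((f - f_centroids) ** 2).sum(-1)))
```

A new tool whose features resemble the first group is then assigned that group's discovered affordance category, without any manual labels.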
ABSTRACT: Motivated by the impact of superresolution methods for imaging, we undertake a detailed and systematic analysis of localization acuity for a biomimetic fingertip and a flat region of tactile skin. We identify three key factors underlying superresolution that enable the perceptual acuity to surpass the sensor resolution: (i) the sensor is constructed with multiple overlapping, broad but sensitive receptive fields; (ii) the tactile perception method interpolates between receptors (taxels) to attain sub-taxel acuity; (iii) active perception ensures robustness to unknown initial contact location. All factors follow from active Bayesian perception applied to biomimetic tactile sensors with an elastomeric covering that spreads the contact over multiple taxels. In consequence, we attain extreme superresolution with a thirty-fold improvement of localization acuity (0.12 mm) over sensor resolution (4 mm). We envisage that these principles will enable cheap, high-acuity tactile sensors that are highly customizable to suit their robotic use. Practical applications encompass any scenario where an end-effector must be placed accurately via the sense of touch.
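Factors (i) and (ii) above, overlapping receptive fields plus interpolation between taxels, can be demonstrated with a minimal model: broad Gaussian taxel responses and an intensity-weighted centroid. The 4 mm pitch matches the abstract; the Gaussian profile and centroid estimator are simplifying assumptions (the paper uses active Bayesian perception).

```python
import numpy as np

taxel_pitch = 4.0                       # mm, the raw sensor resolution
taxel_x = np.arange(0.0, 40.0, taxel_pitch)

def response(contact_x, sigma=6.0):
    """Broad, overlapping receptive fields: each taxel responds with a
    Gaussian profile of its distance to the contact (illustrative model)."""
    return np.exp(-0.5 * ((taxel_x - contact_x) / sigma) ** 2)

def localize(r):
    """Interpolate between taxels: intensity-weighted centroid of the
    response pattern, yielding sub-taxel acuity."""
    return float((r * taxel_x).sum() / r.sum())

true_x = 17.3                           # contact between taxels at 16 and 20 mm
est = localize(response(true_x))
```

Because the elastomer-like spreading excites several taxels at once, the centroid recovers the contact position far more finely than the 4 mm taxel spacing.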
ABSTRACT: Developing applications with reactiveness, scalability and re-usability in mind has always been at the center of attention of robotics researchers. Behavior-based architectures have been proposed as a programming paradigm for developing robust and complex behaviors as the integration of simpler modules whose activities are directly modulated by sensory feedback or input from other modules. The design of behavior-based systems, however, becomes increasingly difficult as the complexity of the application grows. This article proposes an approach for modeling and coordinating behaviors in distributed architectures based on port arbitration, which clearly separates the representation of the behaviors from the composition of the software components. Therefore, based on different behavioral descriptions, the same software components can be reused to implement different applications.
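The port-arbitration idea can be sketched as a toy arbiter: several behavior modules write to the same input port, and per-connection rules, specified separately from the components, decide which message gets through. The inhibition-rule API below is purely illustrative, not the article's actual mechanism or YARP's interface.

```python
class ArbitratedPort:
    """Toy port arbiter: coordination rules live at the port, so the
    behavior modules themselves stay unchanged and reusable."""
    def __init__(self):
        self.rules = {}        # source -> set of sources that inhibit it

    def set_inhibition(self, source, inhibited_by):
        self.rules[source] = set(inhibited_by)

    def select(self, messages):
        """messages: dict source -> value, received in the same time window.
        Returns only the messages from non-inhibited sources."""
        active = {s for s in messages
                  if not (self.rules.get(s, set()) & set(messages))}
        return {s: messages[s] for s in active}

port = ArbitratedPort()
port.set_inhibition("wander", {"avoid"})   # avoidance suppresses wandering
```

Swapping in a different rule set re-coordinates the same modules into a different application, which is the reuse argument the abstract makes.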
ABSTRACT: In this work we propose a comprehensive framework for the gaze stabilization of humanoid robots that is capable of compensating for the motion induced in the camera images both by the auto-generated movements of the robot and by the external disturbances acting on its body. We first provide an extensive mathematical formulation to derive the forward and the differential kinematics of the fixation point, given the mechanism actuating the coupled eyes, and then we employ two separate signals for stabilization purposes: (1) an anticipatory term obtained from the velocity commands sent to the joints while the robot is moving autonomously; (2) a feedback term represented by the data acquired from the on-board gyroscope, which serve to react against unpredicted disturbances. We finally test our method on the iCub robot, showing how the residual optical flow measured from the sequence of camera images is kept significantly low while the robot moves along the planned trajectory and/or varies its posture upon external disturbances.
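The two-term structure described above, an anticipatory feedforward term plus a gyroscope feedback term, can be sketched as a single combination rule. The Jacobian, gains, and vector shapes are placeholder assumptions, not the iCub fixation-point kinematics derived in the paper.

```python
import numpy as np

def stabilizing_eye_velocity(J_fix, qdot_cmd, gyro, k_fb=1.0):
    """Sketch of the two stabilization signals:
    - anticipatory term: cancel the fixation-point motion predicted from
      the joint velocity commands (feedforward);
    - feedback term: counter-rotate against the angular velocity measured
      by the on-board gyroscope (unpredicted disturbances).
    J_fix maps joint velocities to fixation-point velocity."""
    anticipatory = -J_fix @ qdot_cmd
    feedback = -k_fb * gyro
    return anticipatory + feedback
```

With both terms active, self-generated motion is canceled before it appears in the image, while the gyroscope handles whatever the commands did not predict.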
ABSTRACT: Robots in smart factories and robot companions for elderly and people with special needs are just two targeted scenarios in a rapidly expanding application field where humans and robots cooperate and interact at the physical and social levels. Due to the inherent complexity of large-scale robot skin, it is expected that robots will be endowed with procedures and algorithms for autonomous calibration and tuning of corresponding sensorimotor processes. At the same time, robots operating in highly dynamic and unstructured environments will have to quickly adapt to unexpected situations and events, including unknown human-robot (physical) interaction patterns. Manipulation is also the target of the paper by Ho and Hirai, which introduces an advanced and bio-inspired model to characterize friction phenomena and the stick-slip behavior of sliding fingertips.
Article · Nov 2014 · Robotics and Autonomous Systems
ABSTRACT: Systematically developing high-quality reusable software components is a difficult task and requires careful design to find a proper balance between potential reuse, functionality and ease of implementation. Extendibility is an important property of software which helps to reduce the cost of development and significantly boosts reusability. This work introduces an approach to enhance component reusability by extending component functionality using plug-ins at the level of the connection points (ports). Application-dependent functionality such as data monitoring and arbitration can be implemented in a conventional scripting language and plugged into the ports of components. The main advantage of our approach is that it avoids introducing application-dependent modifications to existing components, thus reducing development time and fostering the development of simpler and therefore more reusable components. Another advantage is that it reduces communication and deployment overheads, as extra functionality can be added without introducing additional modules.
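The port plug-in idea can be illustrated with a minimal port that runs attachable callbacks, e.g. a monitor and a filter, on every message, so the component behind the port never changes. The callback API below is a hypothetical sketch, not YARP's actual port-monitor interface.

```python
class PluggablePort:
    """Sketch of extending a component at its connection point: plug-in
    callbacks run on each message; a plug-in may transform or drop it."""
    def __init__(self):
        self.plugins = []

    def attach(self, fn):
        self.plugins.append(fn)

    def write(self, msg):
        for fn in self.plugins:
            msg = fn(msg)
            if msg is None:          # a plug-in dropped the message
                return None
        return msg

port = PluggablePort()
received = []
port.attach(lambda m: (received.append(m) or m))   # monitoring plug-in
port.attach(lambda m: m if m > 0 else None)        # arbitration/filter plug-in
```

Monitoring and arbitration live entirely at the port, so the same component can ship unmodified across applications with different plug-in stacks.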
ABSTRACT: In this paper we tackle the problem of estimating the local compliance of tactile arrays exploiting global measurements from a single force and torque sensor. The proposed procedure exploits a transformation matrix (describing the relative position between the local tactile elements and the global force/torque measurements) to define a linear regression problem on the unknown local stiffness. Experiments have been conducted on the foot of the iCub robot, sensorized with a single force/torque sensor and a tactile array of 250 tactile elements (taxels) on the foot sole. Results show that a simple calibration procedure can be employed to estimate the stiffness parameters of virtual springs over a tactile array, and that this model can predict the normal forces exerted on the array based only on tactile feedback. Building on previous work, the proposed procedure does not necessarily need a priori information on the transformation matrix of the taxels, which can be estimated directly from the available measurements.
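The linear-regression step above can be sketched directly: with per-taxel virtual springs, the global force is linear in the unknown stiffnesses, so a least-squares fit recovers them. The noiseless data and the simplifying assumption that the transformation just sums the spring forces are illustrative (the paper's transformation matrix also encodes geometry, and the array has 250 taxels rather than 5).

```python
import numpy as np

rng = np.random.default_rng(2)

n_taxels, n_samples = 5, 50
k_true = rng.uniform(100.0, 500.0, n_taxels)         # N/m, virtual-spring stiffness

# Calibration data: taxel deflections and the total normal force seen by the
# single F/T sensor (hypothetical values).
d = rng.uniform(0.0, 0.002, (n_samples, n_taxels))   # deflections in m
F = d @ k_true                                       # global normal force

# Linear regression for the unknown per-taxel stiffnesses.
k_est, *_ = np.linalg.lstsq(d, F, rcond=None)
```

Once calibrated, `d @ k_est` predicts the normal force from tactile feedback alone, which is the use the abstract reports.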
ABSTRACT: Legged robots are typically in rigid contact with the environment at multiple locations, which adds a degree of complexity to their control. We present a method to control the motion and a subset of the contact forces of a floating-base robot. We derive a new formulation of the lexicographic optimization problem typically arising in multi-task motion/force control frameworks. The structure of the constraints of the problem (i.e. the dynamics of the robot) allows us to find a sparse analytical solution. This leads to an equivalent optimization with reduced computational complexity, comparable to inverse-dynamics-based approaches. At the same time, our method preserves the flexibility of optimization-based control frameworks. Simulations were carried out to achieve different multi-contact behaviors on a 23-degree-of-freedom humanoid robot, validating the presented approach. A comparison with another state-of-the-art control technique with similar computational complexity shows the benefits of our controller, which can eliminate force/torque
ABSTRACT: This paper presents a new technique to control highly redundant mechanical systems, such as humanoid robots. We take inspiration from two approaches. Prioritized control is a widespread multi-task technique in robotics and animation: tasks have strict priorities and they are satisfied only as long as they do not conflict with any higher-priority task. Optimal control instead formulates an optimization problem whose solution is either a feedback control policy or a feedforward trajectory of control inputs. We introduce strict priorities in multi-task optimal control problems, as an alternative to weighting task errors proportionally to their importance. This ensures that the specified priorities are respected, while avoiding numerical conditioning issues. We compared our approach with both prioritized control and optimal control in tests on a simulated robot with 11 degrees of freedom.
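The strict-priority behavior the abstract contrasts with error weighting can be illustrated by the classical nullspace-projection scheme: the secondary task is resolved only in the nullspace of the primary one, so it can never perturb it. This is a generic redundancy-resolution sketch, not the paper's optimal-control formulation.

```python
import numpy as np

def prioritized_solution(J1, x1, J2, x2):
    """Two-level strict priority via nullspace projection.
    J1 @ q = x1 is the primary task; J2 @ q = x2 is satisfied only
    within the nullspace of J1."""
    J1p = np.linalg.pinv(J1)
    q1 = J1p @ x1                                   # primary task solution
    N1 = np.eye(J1.shape[1]) - J1p @ J1             # nullspace projector of J1
    q2 = np.linalg.pinv(J2 @ N1) @ (x2 - J2 @ q1)   # secondary, restricted to N1
    return q1 + N1 @ q2
```

Unlike a weighted sum of task errors, no choice of weights is needed and the primary task is met exactly whenever it is feasible.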
ABSTRACT: We present a new framework for prioritized multi-task motion/force control of fully-actuated robots. This work is established on a careful review and comparison of the state of the art. Some control frameworks are not optimal, that is, they do not find the optimal solution for the secondary tasks. Other frameworks are optimal, but they tackle the control problem at the kinematic level, hence they neglect the robot dynamics and do not allow for force control. Still other frameworks are optimal and consider force control, but they are computationally less efficient than ours. Our final claim is that, for fully-actuated robots, computing the operational-space inverse dynamics is equivalent to computing the inverse kinematics (at acceleration level) and then the joint-space inverse dynamics. Thanks to this fact, our control framework can efficiently compute the optimal solution by decoupling kinematics and dynamics of the robot. We take into account: motion and force control, soft and rigid contacts, free and constrained robots. Tests in simulation validate our control framework, comparing it with other state-of-the-art equivalent frameworks and showing remarkable improvements in optimality and efficiency.
Article · Oct 2014 · Robotics and Autonomous Systems
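The decoupling claimed above, inverse kinematics at acceleration level followed by joint-space inverse dynamics, can be sketched for the simplest case of an unconstrained fully-actuated robot. The symbols (M inertia matrix, h bias forces, J task Jacobian) follow standard robotics notation; the values in any usage are placeholders.

```python
import numpy as np

def operational_space_tau(M, h, J, Jdot, qdot, xdd_des):
    """Operational-space inverse dynamics computed in two decoupled steps
    (sketch for an unconstrained, fully-actuated robot):
    1) inverse kinematics at acceleration level, mapping the desired task
       acceleration to joint accelerations;
    2) joint-space inverse dynamics on the result."""
    qdd = np.linalg.pinv(J) @ (xdd_des - Jdot @ qdot)   # step 1: IK (acceleration)
    return M @ qdd + h                                  # step 2: joint-space ID
```

The dynamics enter only in step 2, which is what lets the framework treat kinematic task resolution and dynamics separately and efficiently.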
ABSTRACT: In this paper we describe a control framework that integrates tactile and force sensing to regulate the physical interaction of an anthropomorphic robotic arm with the external environment. In particular, we exploit tactile sensors distributed on the robot fingers and a 6-axis force/torque sensor placed at the bottom of the arm, just below the shoulder. Due to their different mounting locations and sensitivities, the sensors provide different types of contact information; their integration allows us to deal with both slight and hard contacts by performing different control strategies depending on the location and the intensity of the contact. We provide real-world experimental results that show how a humanoid torso equipped with a moving head, eyes, arm and hand can realize visually guided reaching while dealing with different types of unexpected contacts with the environment.