ABSTRACT: With robots leaving factory environments and entering less controlled domains, possibly sharing living space with humans, safety needs to be guaranteed. To this end, some form of awareness of their body surface and the space surrounding it is desirable. In this work, we present a unique method that lets a robot learn a distributed representation of the space around its body (or peripersonal space) by exploiting a whole-body artificial skin and physical contact with the environment. Every taxel (tactile element) has a visual receptive field anchored to it. Starting from an initially blank state, the distance of every object entering this receptive field is visually perceived and recorded, together with information about whether the object eventually contacted the particular skin area. This gives rise to a set of probabilities, updated incrementally, that carry information about the likelihood of particular events in the environment contacting a particular set of taxels. The learned representation naturally serves the purpose of predicting contacts with the whole body of the robot, which is of clear behavioral relevance. Furthermore, we devised a simple avoidance controller triggered by this representation, thus endowing the robot with a "margin of safety" around its body. Finally, simply reversing the sign in the controller gives rise to simple "reaching" for objects in the robot's vicinity, which automatically proceeds with the most activated (closest) body part.
IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS2015, Hamburg, Germany; 09/2015
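The incremental probability update described in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the class name, the discretization into distance bins, and the frequency-based estimate are all assumptions:

```python
class TaxelModel:
    """Toy model of one taxel's receptive field: contact frequencies per distance bin."""

    def __init__(self, n_bins=10, max_dist=0.2):
        self.n_bins = n_bins
        self.bin_width = max_dist / n_bins
        self.seen = [0] * n_bins       # objects observed at this distance
        self.contacted = [0] * n_bins  # ... that later touched this taxel

    def update(self, distance, contact):
        """Record one observed object at `distance` (m) and whether it made contact."""
        b = min(int(distance / self.bin_width), self.n_bins - 1)
        self.seen[b] += 1
        if contact:
            self.contacted[b] += 1

    def p_contact(self, distance):
        """Estimated probability that an object at `distance` will contact this taxel."""
        b = min(int(distance / self.bin_width), self.n_bins - 1)
        if self.seen[b] == 0:
            return 0.0  # blank initial state
        return self.contacted[b] / self.seen[b]


taxel = TaxelModel()
taxel.update(0.03, contact=True)
taxel.update(0.03, contact=True)
taxel.update(0.03, contact=False)
print(taxel.p_contact(0.03))  # 2 contacts out of 3 observations in that bin
```

An avoidance (or, with the sign reversed, reaching) controller could then be driven by thresholding `p_contact` over all taxels.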
ABSTRACT: This paper describes the design and realization of novel tactile sensors based on soft materials and magnetic sensing. In particular, the goal was to realize: 1) soft; 2) robust; 3) small; and 4) low-cost sensors that can be easily fabricated and integrated on robotic devices that interact with the environment. We targeted a number of desired features, the most important being: 1) high sensitivity; 2) low hysteresis; and 3) repeatability. The sensor consists of a silicone body in which a small magnet is immersed; a Hall-effect sensor placed below the silicone body measures the magnetic field generated by the magnet, which changes when the magnet is displaced by an applied external pressure. Two different versions of the sensor have been manufactured, characterized, and mounted on an anthropomorphic robotic hand. Experiments in which the hand interacts with real-world objects are reported.
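The sensing principle can be illustrated with the on-axis field of a magnetic dipole, which grows steeply as pressure compresses the silicone and pushes the magnet toward the Hall-effect sensor. This is a back-of-the-envelope sketch; the magnetic moment and the distances are made-up values, not the paper's:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)

def on_axis_dipole_field(moment, z):
    """On-axis flux density (T) of a magnetic dipole with moment `moment` (A*m^2)
    at distance z (m): B = mu0 * m / (2 * pi * z^3)."""
    return MU0 * moment / (2 * math.pi * z ** 3)

m = 1e-3          # assumed magnetic moment of a mm-scale magnet
rest = 2e-3       # magnet-to-sensor distance at rest (2 mm)
pressed = 1.5e-3  # distance after silicone compression (1.5 mm)

b_rest = on_axis_dipole_field(m, rest)
b_pressed = on_axis_dipole_field(m, pressed)
print(b_pressed / b_rest)  # (2/1.5)^3 ~= 2.37: a 25% compression more than doubles B
```

The cubic distance dependence is what gives the device its sensitivity to small displacements of the magnet.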
ABSTRACT: In this paper, we present an ultraflexible tactile sensor, in a piezoelectric-oxide-semiconductor FET configuration, composed of a poly(vinylidenefluoride-co-trifluoroethylene) capacitor with embedded readout circuitry, based on nMOS polysilicon electronics, integrated directly on polyimide. The ultraflexible device is designed according to an extended-gate configuration. The sensor exhibits enhanced piezoelectric properties, thanks to the optimization of the poling procedure (with electric fields up to 3 MV/cm), reaching a final piezoelectric coefficient of 47 pC/N. The device has been electromechanically tested by applying perpendicular forces with a minishaker. The tactile sensor, biased in a common-source arrangement, shows a linear response to increasing sinusoidal stimuli (up to 2 N) and increasing operating frequencies (up to 1200 Hz), obtaining a response of up to 430 mV/N at 200 Hz for the sensor with the highest piezoelectric coefficient. The sensor performance was also tested after several cycles of controlled bending and at different humidity levels, in order to investigate the device behavior in realistic conditions.
ABSTRACT: Hybrid Deep Neural Network - Hidden Markov Model (DNN-HMM) systems have become the state of the art in Automatic Speech Recognition. In this paper we experiment with DNN-HMM phone recognition systems that use measured articulatory information. Deep Neural Networks are used both to compute phone posterior probabilities and to perform Acoustic-to-Articulatory Mapping (AAM). The AAM processes we propose are based on deep representations of the acoustic and articulatory domains. Such representations allow us to: (i) create different pre-training configurations of the DNNs that perform AAM; (ii) perform AAM in a transformed (through DNN autoencoders) articulatory feature (AF) space that captures strong statistical dependencies between articulators. Traditionally, neural networks that approximate the AAM are used to generate AFs that are appended to the observation vector of the speech recognition system. Here we also study a novel approach (AAM-based pretraining) where a DNN performing the AAM is instead used to pretrain the DNN that computes the phone posteriors. Evaluations on both the MOCHA-TIMIT msak0 and the mngu0 datasets show that: (i) the recovered AFs reduce Phone Error Rate (PER) in both clean and noisy speech conditions, with a maximum 10.1% relative phone error reduction in clean speech conditions obtained when autoencoder-transformed AFs are used; (ii) AAM-based pretraining could be a viable strategy to exploit the available small articulatory datasets to improve acoustic models trained on large acoustic-only datasets.
Computer Speech & Language 06/2015; DOI:10.1016/j.csl.2015.05.005 · 1.75 Impact Factor
ABSTRACT: The ability to learn about and efficiently use tools constitutes a desirable property for general-purpose humanoid robots, as it allows them to extend their capabilities beyond the limitations of their own body. Yet it is a topic that has only recently been tackled by the robotics community. Most of the studies published so far make use of tool representations that allow their models to generalize knowledge among similar tools only in a very limited way. Moreover, most studies assume that the tool is always grasped in its common or canonical grasp position, thus not considering the influence of the grasp configuration on the outcome of the actions performed with the tool. In the current paper we present a method that tackles both issues simultaneously by using an extended set of functional features and a novel representation of the effect of tool use. Together, they implicitly account for the grasping configuration and allow the iCub to generalize among tools based on their geometry. Moreover, learning happens in a self-supervised manner: first, the robot autonomously discovers the affordance categories of the tools by clustering the effects of their usage. These categories are subsequently used as a teaching signal to associate visually obtained functional features with the expected tool affordance. In the experiments, we show how this technique can be effectively used to select, given a tool, the best action to achieve a desired effect.
Robotics and Automation (ICRA), 2015 IEEE International Conference on; 05/2015
ABSTRACT: Motivated by the impact of superresolution methods in imaging, we undertake a detailed and systematic analysis of localization acuity for a biomimetic fingertip and a flat region of tactile skin. We identify three key factors underlying superresolution that enable the perceptual acuity to surpass the sensor resolution: (i) the sensor is constructed with multiple overlapping, broad but sensitive receptive fields; (ii) the tactile perception method interpolates between receptors (taxels) to attain sub-taxel acuity; (iii) active perception ensures robustness to unknown initial contact location. All factors follow from active Bayesian perception applied to biomimetic tactile sensors with an elastomeric covering that spreads the contact over multiple taxels. In consequence, we attain extreme superresolution with a thirty-fold improvement of localization acuity (0.12 mm) over sensor resolution (4 mm). We envisage that these principles will enable cheap, high-acuity tactile sensors that are highly customizable to suit their robotic use. Practical applications encompass any scenario where an end-effector must be placed accurately via the sense of touch.
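The role of overlapping receptive fields in factor (ii) can be illustrated with a simple interpolation: even with taxels spaced 4 mm apart, a response-weighted centroid of their graded outputs recovers contact location at sub-taxel acuity. This is an illustrative sketch only; the paper's actual method is active Bayesian perception, and the Gaussian response model here is an assumption:

```python
import math

taxel_positions = [0.0, 4.0, 8.0, 12.0]  # mm; 4 mm pitch = sensor resolution

def taxel_response(taxel_x, contact_x, width=3.0):
    """Broad, overlapping receptive field modeled as a Gaussian of the
    contact-to-taxel distance (a modeling assumption, not the paper's)."""
    return math.exp(-((contact_x - taxel_x) ** 2) / (2 * width ** 2))

def localize(contact_x):
    """Sub-taxel estimate: response-weighted centroid of taxel positions."""
    responses = [taxel_response(x, contact_x) for x in taxel_positions]
    total = sum(responses)
    return sum(r * x for r, x in zip(responses, taxel_positions)) / total

estimate = localize(5.7)  # true contact lies between the taxels at 4.0 and 8.0
print(estimate)           # close to 5.7, i.e. well below the 4 mm pitch
```

The elastomeric covering in the paper plays the role of the broad receptive field: it spreads a point contact over several taxels so that there are graded responses to interpolate.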
ABSTRACT: Developing applications with reactiveness, scalability and reusability in mind has always been at the center of attention of robotics researchers. Behavior-based architectures have been proposed as a programming paradigm to develop robust and complex behaviors as the integration of simpler modules whose activities are directly modulated by sensory feedback or input from other modules. The design of behavior-based systems, however, becomes increasingly difficult as the complexity of the application grows. This article proposes an approach for modeling and coordinating behaviors in distributed architectures based on port arbitration, which clearly separates the representation of the behaviors from the composition of the software components. Therefore, based on different behavioral descriptions, the same software components can be reused to implement different applications.
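The port-arbitration idea, where behavior coordination lives at the connection point rather than inside the components, can be sketched as follows. This is a minimal toy; all class names, behavior names, and the priority rule are hypothetical:

```python
class ArbitratedPort:
    """An input port that selects, among the behaviors currently writing to it,
    the one with the highest priority; the components themselves stay unchanged."""

    def __init__(self, priorities):
        self.priorities = priorities  # behavior name -> priority (higher wins)
        self.pending = {}             # behavior name -> latest command

    def write(self, behavior, command):
        self.pending[behavior] = command

    def read(self):
        """Deliver the command of the highest-priority active behavior, if any."""
        if not self.pending:
            return None
        winner = max(self.pending, key=lambda b: self.priorities[b])
        return self.pending.pop(winner)


motor_port = ArbitratedPort({"wander": 0, "avoid_obstacle": 2, "track_target": 1})
motor_port.write("wander", "forward")
motor_port.write("avoid_obstacle", "turn_left")
print(motor_port.read())  # "turn_left": avoidance overrides wandering
```

Because the arbitration rule is attached to the port, swapping the priority table reconfigures the application without touching the writing components, which is the reuse property the abstract emphasizes.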
ABSTRACT: Recent developments in human-robot interaction show how the ability to communicate with people in a natural way is of great importance for artificial agents. The implementation of facial expressions has been found to significantly increase the interaction capabilities of humanoid robots. For speech, displaying a correct articulation with sound is mandatory to avoid audiovisual illusions like the McGurk effect (leading to comprehension errors) as well as to enhance the intelligibility in noise. This work describes the design, construction and testing of an animatronic talking face developed for the iCub robot. This talking head has an articulated jaw and four independent lip movements actuated by five motors. It is covered by a specially designed elastic tissue cover whose hemlines at the lips are attached to the motors via connecting linkages.
ABSTRACT: In this work we propose a comprehensive framework for the gaze stabilization of humanoid robots that is capable of compensating for the motion induced in the camera images by the auto-generated movements of the robot as well as by the external disturbances acting on its body. We first provide an extensive mathematical formulation to derive the forward and the differential kinematics of the fixation point, given the mechanism actuating the coupled eyes, and then we employ two separate signals for stabilization purposes: (1) an anticipatory term obtained from the velocity commands sent to the joints while the robot is moving autonomously; (2) a feedback term represented by the data acquired from the on-board gyroscope, which serves to react against unpredicted disturbances. We finally test our method on the iCub robot, showing how the residual optical flow measured from the sequence of camera images is kept significantly low while the robot moves along the planned trajectory and/or varies its posture upon external disturbances.
2014 IEEE-RAS International Conference on Humanoid Robots; 11/2014
ABSTRACT: Robots in smart factories and robot companions for the elderly and people with special needs are just two targeted scenarios in a rapidly expanding application field where humans and robots cooperate and interact at the physical and social levels. Due to the inherent complexity of large-scale robot skin, it is expected that robots will be endowed with procedures and algorithms for autonomous calibration and tuning of the corresponding sensorimotor processes. At the same time, robots operating in highly dynamic and unstructured environments will have to quickly adapt to unexpected situations and events, including unknown human-robot (physical) interaction patterns. Manipulation is also the target of the paper by Ho and Hirai, which introduces an advanced and bio-inspired model to characterize friction phenomena and the stick-slip behavior of sliding fingertips.
Robotics and Autonomous Systems 11/2014; 63. DOI:10.1016/j.robot.2014.11.002 · 1.26 Impact Factor
ABSTRACT: Systematically developing high-quality reusable software components is a difficult task and requires careful design to find a proper balance between potential reuse, functionality and ease of implementation. Extendibility is an important property of software which helps to reduce the cost of development and significantly boosts reusability. This work introduces an approach to enhance component reusability by extending component functionality using plug-ins at the level of the connection points (ports). Application-dependent functionalities such as data monitoring and arbitration can be implemented in a conventional scripting language and plugged into the ports of components. The main advantage of our approach is that it avoids introducing application-dependent modifications to existing components, thus reducing development time and fostering the development of simpler and therefore more reusable components. Another advantage is that it reduces communication and deployment overheads, as extra functionality can be added without introducing additional modules.
ABSTRACT: This paper presents a new technique to control highly redundant mechanical systems, such as humanoid robots. We take inspiration from two approaches. Prioritized control is a widespread multi-task technique in robotics and animation: tasks have strict priorities and they are satisfied only as long as they do not conflict with any higher-priority task. Optimal control instead formulates an optimization problem whose solution is either a feedback control policy or a feedforward trajectory of control inputs. We introduce strict priorities in multi-task optimal control problems, as an alternative to weighting task errors proportionally to their importance. This ensures that the specified priorities are respected, while avoiding numerical conditioning issues. We compared our approach with both prioritized control and optimal control in tests on a simulated robot with 11 degrees of freedom.
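The difference between strict priorities and error weighting can be illustrated at the kinematic level with the classic null-space projection, where a lower-priority task acts only in the null space of the higher-priority one. This is a textbook construction used here for illustration, not the optimal-control formulation the paper introduces:

```python
import numpy as np

def prioritized_velocities(J1, dx1, J2, dx2):
    """Joint velocities that achieve task 1 exactly and task 2 only within
    the null space of task 1 (strict priority, kinematic level)."""
    J1p = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1p @ J1            # projector onto null(J1)
    dq1 = J1p @ dx1                                # primary task
    dq2 = np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq1)  # secondary, projected
    return dq1 + N1 @ dq2

# Two 1-DoF tasks on a redundant 3-DoF system (made-up Jacobians).
J1 = np.array([[1.0, 0.0, 0.0]])
J2 = np.array([[0.0, 1.0, 1.0]])
dq = prioritized_velocities(J1, np.array([1.0]), J2, np.array([2.0]))
print(J1 @ dq)  # task 1 is satisfied exactly regardless of task 2
```

A weighted formulation would instead trade the two task errors off against each other, which is exactly the behavior the strict-priority scheme avoids.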
ABSTRACT: Legged robots are typically in rigid contact with the environment at multiple locations, which adds a degree of complexity to their control. We present a method to control the motion and a subset of the contact forces of a floating-base robot. We derive a new formulation of the lexicographic optimization problem typically arising in multi-task motion/force control frameworks. The structure of the constraints of the problem (i.e. the dynamics of the robot) allows us to find a sparse analytical solution. This leads to an equivalent optimization with reduced computational complexity, comparable to inverse-dynamics based approaches. At the same time, our method preserves the flexibility of optimization-based control frameworks. Simulations were carried out to achieve different multi-contact behaviors on a 23-degree-of-freedom humanoid robot, validating the presented approach. A comparison with another state-of-the-art control technique with similar computational complexity shows the benefits of our controller, which can eliminate force/torque
ABSTRACT: We present a new framework for prioritized multi-task motion/force control of fully-actuated robots. This work is established on a careful review and comparison of the state of the art. Some control frameworks are not optimal, that is, they do not find the optimal solution for the secondary tasks. Other frameworks are optimal, but they tackle the control problem at the kinematic level, hence they neglect the robot dynamics and do not allow for force control. Still other frameworks are optimal and consider force control, but they are computationally less efficient than ours. Our final claim is that, for fully-actuated robots, computing the operational-space inverse dynamics is equivalent to computing the inverse kinematics (at the acceleration level) and then the joint-space inverse dynamics. Thanks to this fact, our control framework can efficiently compute the optimal solution by decoupling the kinematics and dynamics of the robot. We take into account: motion and force control, soft and rigid contacts, free and constrained robots. Tests in simulation validate our control framework, comparing it with other state-of-the-art equivalent frameworks and showing remarkable improvements in optimality and efficiency.
Robotics and Autonomous Systems 10/2014; DOI:10.1016/j.robot.2014.08.016 · 1.26 Impact Factor
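The decomposition claimed above, operational-space inverse dynamics as acceleration-level inverse kinematics followed by joint-space inverse dynamics, can be sketched in two steps. This is illustrative only; all the dynamics quantities below are made-up constants for a toy 2-DoF system:

```python
import numpy as np

def task_to_joint_acc(J, dJdq, ddx):
    """Acceleration-level inverse kinematics: ddx = J*ddq + dJ/dt*dq  =>  ddq."""
    return np.linalg.pinv(J) @ (ddx - dJdq)

def joint_inverse_dynamics(M, h, ddq):
    """Joint-space inverse dynamics: tau = M*ddq + h, with h the bias forces."""
    return M @ ddq + h

# Made-up 2-DoF quantities.
J = np.array([[1.0, 0.5], [0.0, 1.0]])  # task Jacobian (square: fully determined)
dJdq = np.array([0.1, -0.2])            # the dJ/dt * dq drift term
M = np.array([[2.0, 0.3], [0.3, 1.0]])  # joint-space inertia matrix
h = np.array([0.5, 0.1])                # Coriolis + gravity bias
ddx = np.array([1.0, 0.0])              # desired task-space acceleration

ddq = task_to_joint_acc(J, dJdq, ddx)   # step 1: kinematics
tau = joint_inverse_dynamics(M, h, ddq) # step 2: dynamics
# Sanity check: the resulting ddq reproduces the desired task acceleration.
print(np.allclose(J @ ddq + dJdq, ddx))  # True
```

The point of the decoupling is that the kinematic step and the dynamic step can each be solved efficiently on their own, rather than inverting the dynamics in the operational space.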
ABSTRACT: In this paper we describe a control framework that integrates tactile and force sensing to regulate the physical interaction of an anthropomorphic robotic arm with the external environment. In particular, we exploit tactile sensors distributed on the robot's fingers and a 6-axis force/torque sensor placed at the bottom of the arm, just below the shoulder. Due to their different mounting locations and sensitivities, the sensors provide different types of contact information; their integration allows us to deal with both slight and hard contacts by performing different control strategies depending on the location and the intensity of the contact. We provide real-world experimental results that show how a humanoid torso equipped with a moving head, eyes, arm and hand can realize visually guided reaching while dealing with different types of unexpected contacts with the environment.
IEEE International Symposium on Intelligent Control (ISIC); 10/2014
ABSTRACT: Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object-directed and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were performed with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested either to reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners, except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance and modulate the spontaneity and pleasantness of the interaction, whatever the nature of the communication partner.
PLoS ONE 08/2014; 9(8):e106172. DOI:10.1371/journal.pone.0106172 · 3.23 Impact Factor
ABSTRACT: This paper proposes a learning-from-demonstration system based on a motion feature called the phase transfer sequence. The system aims to synthesize the knowledge of humanoid whole-body motions learned during teacher-supported interactions, and to apply this knowledge during different physical interactions between a robot and its surroundings. The phase transfer sequence represents the temporal order of the changing points in multiple time sequences. It encodes the dynamical aspects of the sequences so as to absorb the gaps in timing and amplitude derived from interaction changes. The phase transfer sequence was evaluated in reinforcement learning of sitting-up and walking motions conducted by a real humanoid robot and a compatible simulator. In both tasks, the robotic motions were less dependent on physical interactions when learned with the proposed feature than with conventional similarity measurements. The phase transfer sequence also enhanced the convergence speed of motion learning. Our proposed feature is original primarily because it absorbs the gaps caused by changes in the originally acquired physical interactions, thereby enhancing the learning speed in subsequent interactions.
IEEE transactions on neural networks and learning systems 07/2014; 26(5). DOI:10.1109/TNNLS.2014.2333092 · 4.29 Impact Factor
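The phase transfer sequence, the temporal order of the changing points across multiple time sequences, can be sketched as follows. This is a rough illustration of the feature's definition only; the change-point criterion used here (a sign change in the discrete derivative, i.e. a local extremum) and all names are assumptions:

```python
def change_points(seq):
    """Indices where the discrete derivative changes sign (local extrema)."""
    pts = []
    for t in range(1, len(seq) - 1):
        d_prev = seq[t] - seq[t - 1]
        d_next = seq[t + 1] - seq[t]
        if d_prev * d_next < 0:
            pts.append(t)
    return pts

def phase_transfer_sequence(sequences):
    """Merge the change points of all sequences in time order, keeping only
    which sequence 'fires' next; exact timing and amplitude are abstracted away."""
    events = []
    for name, seq in sequences.items():
        events.extend((t, name) for t in change_points(seq))
    events.sort()
    return [name for _, name in events]

joints = {
    "hip":  [0, 1, 2, 1, 0, 1, 2],  # peak at t=2, valley at t=4
    "knee": [0, 0, 1, 2, 1, 0, 0],  # peak at t=3
}
print(phase_transfer_sequence(joints))  # ['hip', 'knee', 'hip']
```

Because only the order of events survives, two executions of the same motion that differ in timing or amplitude map to the same feature, which is the gap-absorbing property the abstract describes.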
ABSTRACT: Human expertise in face perception grows over development, but even within minutes of birth, infants exhibit an extraordinary sensitivity to face-like stimuli. The dominant theory accounts for innate face detection by proposing that the neonate brain contains an innate face detection device, dubbed 'Conspec'. Newborn face preference has been promoted as some of the strongest evidence for innate knowledge, and forms a canonical stage for the modern form of the nature-nurture debate in psychology. Interpretation of newborn face preference results has concentrated on monocular stimulus properties, with little mention or focused investigation of potential binocular involvement. However, the question of whether and how newborns integrate the binocular visual streams bears directly on the generation of observable visual preferences. In this theoretical paper, we employ a synthetic approach utilizing robotic and computational models to draw together the threads of binocular integration and face preference in newborns, and demonstrate cases where the former may explain the latter. We suggest that a system-level view considering the binocular embodiment of newborn vision may offer a mutually satisfying resolution to some long-running arguments in the polarizing debate surrounding the existence and causal structure of newborns' 'innate knowledge' of faces.