Giorgio Metta

University of Plymouth, Plymouth, England, United Kingdom

Publications (234) · 147.51 Total Impact Points

  • ABSTRACT: This paper describes the design and realization of novel tactile sensors based on soft materials and magnetic sensing. In particular, the goal was to realize 1) soft, 2) robust, 3) small, and 4) low-cost sensors that can be easily fabricated and integrated on robotic devices that interact with the environment. We targeted a number of desired features, the most important being 1) high sensitivity, 2) low hysteresis, and 3) repeatability. The sensor consists of a silicone body in which a small magnet is immersed; a Hall-effect sensor placed below the silicone body measures the magnetic field generated by the magnet, which changes when the magnet is displaced by an applied external pressure. Two different versions of the sensor have been manufactured, characterized, and mounted on an anthropomorphic robotic hand. Experiments in which the hand interacts with real-world objects are reported.
    IEEE Sensors Journal 08/2015; 15(8):1-1. DOI:10.1109/JSEN.2015.2417759 · 1.85 Impact Factor
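The sensing principle described in this abstract can be sketched in a few lines: the measured field is inverted through a magnet model to a displacement, which a linearized elasticity model converts to force. All constants and the dipole model below are illustrative assumptions, not values or equations from the paper:

```python
import numpy as np

# Hypothetical sketch: a magnet embedded in silicone moves toward a
# Hall-effect sensor under pressure; the field reading is inverted to a
# displacement, then mapped to force via the silicone stiffness.
M_DIPOLE = 1e-8     # effective on-axis dipole strength (T*m^3), illustrative
GAP_REST = 2e-3     # rest magnet-to-sensor distance (m), illustrative
K_SILICONE = 500.0  # linearized silicone stiffness (N/m), illustrative

def field_at_gap(gap_m):
    """On-axis dipole field magnitude, ~ m / r^3."""
    return M_DIPOLE / gap_m**3

def force_from_field(b_measured):
    """Invert the dipole model for the gap, then apply Hooke's law."""
    gap = (M_DIPOLE / b_measured) ** (1.0 / 3.0)
    displacement = GAP_REST - gap
    return K_SILICONE * displacement

# Round trip: a 0.5 mm compression maps back to 0.25 N under these constants.
b = field_at_gap(GAP_REST - 0.5e-3)
print(force_from_field(b))   # 0.25
```

In practice a per-sensor calibration curve would replace the analytic inversion, which is one reason the abstract emphasizes repeatability and low hysteresis.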
  • ABSTRACT: In this paper, we present an ultraflexible tactile sensor, in a piezoelectric oxide-semiconductor FET configuration, composed of a poly(vinylidene fluoride-co-trifluoroethylene) capacitor with embedded readout circuitry, based on nMOS polysilicon electronics, integrated directly on polyimide. The ultraflexible device is designed according to an extended-gate configuration. The sensor exhibits enhanced piezoelectric properties thanks to the optimization of the poling procedure (with electric field up to 3 MV/cm), reaching a final piezoelectric coefficient of 47 pC/N. The device has been electromechanically tested by applying perpendicular forces with a minishaker. The tactile sensor, biased in a common-source arrangement, shows a linear response to increasing sinusoidal stimuli (up to 2 N) and increasing operating frequencies (up to 1200 Hz), with a response up to 430 mV/N at 200 Hz for the sensor with the highest piezoelectric coefficient. The sensor performance was also tested after several cycles of controlled bending at different humidity levels, with the intent of investigating the device behavior in real conditions.
    IEEE Sensors Journal 07/2015; 15(7):1-1. DOI:10.1109/JSEN.2015.2399531 · 1.85 Impact Factor
  • ABSTRACT: Hybrid Deep Neural Network-Hidden Markov Model (DNN-HMM) systems have become the state of the art in Automatic Speech Recognition. In this paper, we experiment with DNN-HMM phone recognition systems that use measured articulatory information. Deep Neural Networks are used both to compute phone posterior probabilities and to perform Acoustic-to-Articulatory Mapping (AAM). The AAM processes we propose are based on deep representations of the acoustic and articulatory domains. Such representations allow us to: (i) create different pre-training configurations of the DNNs that perform AAM; (ii) perform AAM on a transformed (through DNN Autoencoders) articulatory feature (AF) space that captures strong statistical dependencies between articulators. Traditionally, neural networks that approximate the AAM are used to generate AFs that are appended to the observation vector of the speech recognition system. Here we also study a novel approach (AAM-based pretraining) where a DNN performing the AAM is instead used to pretrain the DNN that computes the phone posteriors. Evaluations on both the MOCHA-TIMIT msak0 and the mngu0 datasets show that: (i) the recovered AFs reduce Phone Error Rate (PER) in both clean and noisy speech conditions, with a maximum 10.1% relative phone error reduction in clean speech conditions obtained when Autoencoder-transformed AFs are used; (ii) AAM-based pretraining could be a viable strategy to exploit the available small articulatory datasets to improve acoustic models trained on large acoustic-only datasets.
    Computer Speech & Language 06/2015; DOI:10.1016/j.csl.2015.05.005 · 1.81 Impact Factor
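The traditional AF-appending scheme mentioned in the abstract can be illustrated compactly. Here a toy least-squares regression stands in for the paper's DNN-based AAM, and all dimensions and data are synthetic:

```python
import numpy as np

# Sketch of the classical usage: a learned acoustic-to-articulatory map
# recovers articulatory features (AFs), which are appended to the acoustic
# observation vector fed to the recognizer. Linear regression is a stand-in
# for the paper's deep networks; sizes are illustrative.
rng = np.random.default_rng(0)
acoustic = rng.normal(size=(100, 13))     # e.g. 13 acoustic coefficients/frame
true_map = rng.normal(size=(13, 6))       # hidden acoustic->AF relation
articulatory = acoustic @ true_map        # 6 "measured" AFs per frame

# "Train" the AAM by least squares on the parallel acoustic/articulatory data.
W, *_ = np.linalg.lstsq(acoustic, articulatory, rcond=None)

def augment_observations(acoustic_frames):
    """Append recovered AFs to each acoustic frame."""
    recovered = acoustic_frames @ W
    return np.hstack([acoustic_frames, recovered])

obs = augment_observations(acoustic)
print(obs.shape)   # (100, 19): 13 acoustic + 6 recovered articulatory dims
```

The paper's alternative, AAM-based pretraining, instead reuses the trained AAM network's weights to initialize the phone-posterior DNN rather than appending its outputs.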
  • Carlos Balaguer, Tamim Asfour, Giorgio Metta
    International Journal of Humanoid Robotics 05/2015; DOI:10.1142/S0219843615020016 · 0.41 Impact Factor
  • ABSTRACT: Motivated by the impact of superresolution methods for imaging, we undertake a detailed and systematic analysis of localization acuity for a biomimetic fingertip and a flat region of tactile skin. We identify three key factors underlying superresolution that enable the perceptual acuity to surpass the sensor resolution: (i) the sensor is constructed with multiple overlapping, broad but sensitive receptive fields; (ii) the tactile perception method interpolates between receptors (taxels) to attain sub-taxel acuity; (iii) active perception ensures robustness to unknown initial contact location. All factors follow from active Bayesian perception applied to biomimetic tactile sensors with an elastomeric covering that spreads the contact over multiple taxels. In consequence, we attain extreme superresolution with a thirty-fold improvement of localization acuity (0.12 mm) over sensor resolution (4 mm). We envisage that these principles will enable cheap, high-acuity tactile sensors that are highly customizable to suit their robotic use. Practical applications encompass any scenario where an end-effector must be placed accurately via the sense of touch.
    IEEE Transactions on Robotics 04/2015; 31(3). DOI:10.1109/TRO.2015.2414135 · 2.65 Impact Factor
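Factors (i) and (ii) from this abstract, overlapping broad receptive fields plus Bayesian interpolation between taxels, can be demonstrated in simulation. The Gaussian receptive-field model, noise level, and grid are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Taxels spaced 4 mm apart, each with a broad Gaussian receptive field;
# a grid-based maximum-likelihood estimate interpolates between taxels to
# localize contact well below the taxel spacing.
taxel_pos = np.arange(0.0, 20.0, 4.0)   # taxel centers every 4 mm
sigma = 3.0                              # receptive field width (mm), illustrative
noise = 0.05                             # response noise std, illustrative

def responses(contact_mm, rng):
    """Noisy taxel responses to a contact at contact_mm."""
    r = np.exp(-(taxel_pos - contact_mm) ** 2 / (2 * sigma ** 2))
    return r + rng.normal(scale=noise, size=r.shape)

def localize(r):
    """Maximum-likelihood contact location over a fine candidate grid."""
    grid = np.linspace(0.0, 16.0, 1601)  # 0.01 mm candidate spacing
    pred = np.exp(-(taxel_pos[None, :] - grid[:, None]) ** 2 / (2 * sigma ** 2))
    loglik = -np.sum((pred - r[None, :]) ** 2, axis=1) / (2 * noise ** 2)
    return grid[np.argmax(loglik)]

rng = np.random.default_rng(1)
true = 7.3                               # contact between two taxels
est = localize(responses(true, rng))
print(abs(est - true))                   # error far below the 4 mm spacing
```

The third factor, active perception, would wrap this estimator in a control loop that repositions the sensor to reduce uncertainty; it is omitted here for brevity.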
  • Ali Paikan, Giorgio Metta, Lorenzo Natale
    ABSTRACT: Reactiveness, scalability, and reusability have always been central concerns for researchers developing robotic applications. Behavior-based architectures have been proposed as a programming paradigm to develop robust and complex behaviors as the integration of simpler modules whose activities are directly modulated by sensory feedback or input from other modules. The design of behavior-based systems, however, becomes increasingly difficult as the complexity of the application grows. This article proposes an approach for modeling and coordinating behaviors in distributed architectures based on port arbitration, which clearly separates the representation of the behaviors from the composition of the software components. Therefore, based on different behavioral descriptions, the same software components can be reused to implement different applications.
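The port-arbitration idea can be sketched as a per-port selector that picks one of several competing writers using declared priority and inhibition rules, leaving the receiving component application-agnostic. The class and rule names below are illustrative, not the actual middleware API:

```python
# Minimal sketch of port arbitration: several behaviors write to the same
# input port; an arbitrator attached to the port selects the winner by a
# declared priority/inhibition rule, so coordination logic stays outside
# the component itself. All names are hypothetical.
class ArbitratedPort:
    def __init__(self):
        self.rules = {}   # behavior name -> (priority, set of inhibitors)

    def add_rule(self, behavior, priority, inhibited_by=()):
        self.rules[behavior] = (priority, set(inhibited_by))

    def select(self, messages):
        """Pick the winning message among concurrently active behaviors."""
        active = set(messages)

        def allowed(b):
            _, inhibitors = self.rules[b]
            return not (inhibitors & active)   # suppressed by an active peer?

        candidates = [b for b in messages if allowed(b)]
        winner = max(candidates, key=lambda b: self.rules[b][0])
        return messages[winner]

port = ArbitratedPort()
port.add_rule("wander", priority=1)
port.add_rule("avoid_obstacle", priority=2)
port.add_rule("grasp", priority=3, inhibited_by=["avoid_obstacle"])

# avoid_obstacle both outranks wander and inhibits grasp:
cmd = port.select({"wander": "forward", "avoid_obstacle": "turn_left",
                   "grasp": "reach"})
print(cmd)   # turn_left
```

Changing the application then means changing only the rule set, not the components, which is exactly the reuse argument made in the abstract.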
  • Giorgio Metta, Lorenzo Natale, Mark Lee
    IEEE Transactions on Autonomous Mental Development 12/2014; 6(4):243-243. DOI:10.1109/TAMD.2014.2377335 · 1.35 Impact Factor
  • ABSTRACT: In this work we propose a comprehensive framework for the gaze stabilization of humanoid robots that compensates for the motion induced in the camera images both by the robot's self-generated movements and by external disturbances acting on its body. We first provide an extensive mathematical formulation to derive the forward and differential kinematics of the fixation point, given the mechanism actuating the coupled eyes, and then employ two separate signals for stabilization purposes: (1) an anticipatory term obtained from the velocity commands sent to the joints while the robot is moving autonomously; (2) a feedback term, represented by the data acquired from the on-board gyroscope, that serves to react against unpredicted disturbances. We finally test our method on the iCub robot, showing that the residual optical flow measured from the sequence of camera images is kept significantly low while the robot moves along the planned trajectory and/or varies its posture upon external contacts.
    2014 IEEE-RAS International Conference on Humanoid Robots; 11/2014
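The two-term structure of the stabilizer can be reduced to a scalar toy model: an anticipatory term cancels the head motion known from the outgoing commands, and a gyro feedback term damps whatever the commands did not predict. The gain and signals are illustrative, not the paper's formulation:

```python
# Scalar sketch of the anticipatory + feedback gaze stabilization scheme.
# All quantities are hypothetical scalar velocities (rad/s).
K_FB = 0.8   # feedback gain on the gyroscope residual, illustrative

def eye_velocity(head_cmd_vel, gyro_measured):
    anticipatory = -head_cmd_vel                 # cancel the planned motion
    disturbance = gyro_measured - head_cmd_vel   # what the commands miss
    return anticipatory - K_FB * disturbance

# Pure self-motion is cancelled exactly; an external push is only damped:
print(eye_velocity(0.3, 0.3))    # -0.3  (full anticipatory compensation)
print(eye_velocity(0.3, 0.5))    # -0.46 (extra 0.2 rad/s damped by K_FB)
```

The anticipatory term acts with no sensing latency, which is why the abstract pairs it with, rather than replaces it by, the gyroscope feedback.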
  • Robotics and Autonomous Systems 11/2014; 63. DOI:10.1016/j.robot.2014.11.002 · 1.11 Impact Factor
  • ABSTRACT: Systematically developing high-quality reusable software components is a difficult task and requires careful design to find a proper balance between potential reuse, functionality, and ease of implementation. Extendibility is an important property of software that helps to reduce development cost and significantly boosts reusability. This work introduces an approach to enhance component reusability by extending component functionality with plug-ins at the level of the connection points (ports). Application-dependent functionality such as data monitoring and arbitration can be implemented in a conventional scripting language and plugged into the ports of components. The main advantage of our approach is that it avoids introducing application-dependent modifications to existing components, thus reducing development time and fostering the development of simpler and therefore more reusable components. Another advantage is that it reduces communication and deployment overheads, as extra functionality can be added without introducing additional modules.
  • ABSTRACT: Legged robots are typically in rigid contact with the environment at multiple locations, which adds a degree of complexity to their control. We present a method to control the motion and a subset of the contact forces of a floating-base robot. We derive a new formulation of the lexicographic optimization problem typically arising in multi-task motion/force control frameworks. The structure of the constraints of the problem (i.e., the dynamics of the robot) allows us to find a sparse analytical solution. This leads to an equivalent optimization with reduced computational complexity, comparable to inverse-dynamics-based approaches. At the same time, our method preserves the flexibility of optimization-based control frameworks. Simulations were carried out to achieve different multi-contact behaviors on a 23-degree-of-freedom humanoid robot, validating the presented approach. A comparison with another state-of-the-art control technique of similar computational complexity shows the benefits of our controller, which can eliminate force/torque discontinuities.
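A generic two-level lexicographic least-squares problem of the kind this abstract refers to can be solved by nullspace projection: the secondary task is optimized only within the nullspace of the primary one. This is the textbook construction, not the paper's sparse analytical solution, and the task matrices are made up:

```python
import numpy as np

# Two-level lexicographic least squares via nullspace projection:
# task 1 (A1 x = b1) is satisfied exactly where possible; task 2 (A2 x = b2)
# is optimized only over the remaining freedom.
def lexicographic_ls(A1, b1, A2, b2):
    x1 = np.linalg.pinv(A1) @ b1
    N1 = np.eye(A1.shape[1]) - np.linalg.pinv(A1) @ A1  # nullspace projector
    z = np.linalg.pinv(A2 @ N1) @ (b2 - A2 @ x1)
    return x1 + N1 @ z

A1 = np.array([[1.0, 0.0, 0.0]])          # task 1 fixes x[0]
b1 = np.array([2.0])
A2 = np.array([[1.0, 1.0, 0.0],           # task 2 would like x[0]+x[1]=0 ...
               [0.0, 0.0, 1.0]])          # ... and x[2]=5
b2 = np.array([0.0, 5.0])

x = lexicographic_ls(A1, b1, A2, b2)
print(x)   # [ 2. -2.  5.]: priority 1 wins on x[0], task 2 adapts around it
```

The paper's contribution is exploiting the structure of the dynamics constraint so that this hierarchy can be resolved analytically and sparsely, at a cost comparable to plain inverse dynamics.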
  • ABSTRACT: This paper presents a new technique to control highly redundant mechanical systems, such as humanoid robots. We take inspiration from two approaches. Prioritized control is a widespread multi-task technique in robotics and animation: tasks have strict priorities and are satisfied only as long as they do not conflict with any higher-priority task. Optimal control instead formulates an optimization problem whose solution is either a feedback control policy or a feedforward trajectory of control inputs. We introduce strict priorities in multi-task optimal control problems as an alternative to weighting task errors proportionally to their importance. This ensures that the specified priorities are respected while avoiding numerical conditioning issues. We compared our approach with both prioritized control and optimal control in tests on a simulated robot with 11 degrees of freedom.
  • ABSTRACT: We present a new framework for prioritized multi-task motion/force control of fully-actuated robots. This work is established on a careful review and comparison of the state of the art. Some control frameworks are not optimal, that is, they do not find the optimal solution for the secondary tasks. Other frameworks are optimal but tackle the control problem at the kinematic level, hence they neglect the robot dynamics and do not allow for force control. Still other frameworks are optimal and consider force control, but are computationally less efficient than ours. Our final claim is that, for fully-actuated robots, computing the operational-space inverse dynamics is equivalent to computing the inverse kinematics (at the acceleration level) and then the joint-space inverse dynamics. Thanks to this fact, our control framework can efficiently compute the optimal solution by decoupling the kinematics and dynamics of the robot. We take into account motion and force control, soft and rigid contacts, and free and constrained robots. Tests in simulation validate our control framework, comparing it with other state-of-the-art equivalent frameworks and showing remarkable improvements in optimality and efficiency.
    Robotics and Autonomous Systems 10/2014; DOI:10.1016/j.robot.2014.08.016 · 1.11 Impact Factor
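The decoupling claimed in this abstract, inverse kinematics at the acceleration level followed by joint-space inverse dynamics, can be shown on a toy fully-actuated system. The 2-DOF matrices below are made-up numbers, not a real robot model:

```python
import numpy as np

# Step 1: acceleration-level inverse kinematics maps a desired task
# acceleration to joint accelerations. Step 2: joint-space inverse dynamics
# maps joint accelerations to torques. Model quantities are illustrative.
def accel_level_ik(J, dJdq, xdd_des):
    """qdd such that J qdd + (dJ/dt) qdot = xdd_des (least-squares)."""
    return np.linalg.pinv(J) @ (xdd_des - dJdq)

def joint_space_id(M, h, qdd):
    """tau = M qdd + h: the joint-space inverse dynamics."""
    return M @ qdd + h

J = np.array([[1.0, 0.5]])      # task Jacobian (1 task dim, 2 joints)
dJdq = np.array([0.1])          # drift term (dJ/dt) qdot
M = np.array([[2.0, 0.2],       # joint-space inertia matrix
              [0.2, 1.0]])
h = np.array([0.3, -0.1])       # Coriolis/centrifugal + gravity terms

qdd = accel_level_ik(J, dJdq, np.array([1.0]))
tau = joint_space_id(M, h, qdd)
print(J @ qdd + dJdq)           # recovers the desired task acceleration 1.0
```

For fully-actuated robots every joint acceleration is realizable by some torque, which is what makes this two-step factorization equivalent to operational-space inverse dynamics.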
  • ABSTRACT: Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were performed with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested either to reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance and modulate the spontaneity and pleasantness of the interaction, whatever the nature of the communication partner.
    PLoS ONE 08/2014; 9(8):e106172. DOI:10.1371/journal.pone.0106172 · 3.53 Impact Factor
  • ABSTRACT: This paper proposes a learning-from-demonstration system based on a motion feature called the phase transfer sequence. The system aims to synthesize the knowledge of humanoid whole-body motions learned during teacher-supported interactions and apply this knowledge during different physical interactions between a robot and its surroundings. The phase transfer sequence represents the temporal order of the changing points in multiple time sequences. It encodes the dynamical aspects of the sequences so as to absorb the gaps in timing and amplitude derived from interaction changes. The phase transfer sequence was evaluated in reinforcement learning of sitting-up and walking motions conducted by a real humanoid robot and a compatible simulator. In both tasks, the robotic motions were less dependent on physical interactions when learned with the proposed feature than with conventional similarity measurements. The phase transfer sequence also enhanced the convergence speed of motion learning. Our proposed feature is original primarily because it absorbs the gaps caused by changes in the originally acquired physical interactions, thereby enhancing the learning speed in subsequent interactions.
    IEEE transactions on neural networks and learning systems 07/2014; 26(5). DOI:10.1109/TNNLS.2014.2333092 · 4.37 Impact Factor
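One illustrative reading of the phase transfer sequence idea, the temporal order of changing points across multiple time sequences, is sketched below. The change-point detector and the toy trajectories are assumptions for demonstration, not the paper's definition:

```python
import numpy as np

# Detect the changing points (slope sign changes) of several joint
# trajectories and keep only their temporal order, discarding exact timing
# and amplitude. Two executions differing in speed but sharing the same
# coordination then compare as equal.
def change_points(signal):
    """Indices where the signal's slope changes sign (peaks and troughs)."""
    d = np.diff(signal)
    return [i for i in range(1, len(d)) if d[i - 1] * d[i] < 0]

def phase_transfer_sequence(signals):
    """Temporal order of (joint, change-point) events across all joints."""
    events = [(t, j) for j, s in enumerate(signals) for t in change_points(s)]
    return [j for _, j in sorted(events)]

t = np.linspace(0.0, 2 * np.pi, 60)
u = np.linspace(0.0, np.pi, 60)               # half the duration, same samples
slow = [np.sin(t), np.sin(t - 1.0)]           # joint 1 peaks after joint 0
fast = [np.sin(2 * u), np.sin(2 * u - 1.0)]   # same motion at twice the speed

print(phase_transfer_sequence(slow))          # [0, 1, 0, 1]
print(phase_transfer_sequence(slow) == phase_transfer_sequence(fast))  # True
```

This timing-and-amplitude invariance is the property the abstract credits for transferring motions learned in one physical interaction to another.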
  • ABSTRACT: Human expertise in face perception grows over development, but even within minutes of birth, infants exhibit an extraordinary sensitivity to face-like stimuli. The dominant theory accounts for innate face detection by proposing that the neonate brain contains an innate face detection device, dubbed 'Conspec'. Newborn face preference has been promoted as some of the strongest evidence for innate knowledge, and forms a canonical stage for the modern form of the nature-nurture debate in psychology. Interpretation of newborn face preference results has concentrated on monocular stimulus properties, with little mention or focused investigation of potential binocular involvement. However, the question of whether and how newborns integrate the binocular visual streams bears directly on the generation of observable visual preferences. In this theoretical paper, we employ a synthetic approach utilizing robotic and computational models to draw together the threads of binocular integration and face preference in newborns, and demonstrate cases where the former may explain the latter. We suggest that a system-level view considering the binocular embodiment of newborn vision may offer a mutually satisfying resolution to some long-running arguments in the polarizing debate surrounding the existence and causal structure of newborns' 'innate knowledge' of faces.
    Developmental Science 06/2014; 17(6). DOI:10.1111/desc.12159 · 3.89 Impact Factor
  • Developmental Science 06/2014; 17(6). DOI:10.1111/desc.12199 · 3.89 Impact Factor
  • ABSTRACT: Action perception and recognition are core abilities fundamental for human social interaction. A parieto-frontal network (the mirror neuron system) matches visually presented biological motion information onto observers' motor representations. This process of matching the actions of others onto our own sensorimotor repertoire is thought to be important for action recognition, providing a non-mediated "motor perception" based on a bidirectional flow of information along the mirror parieto-frontal circuits. State-of-the-art machine learning strategies for hand action identification have shown better performance when sensorimotor data, as opposed to visual information only, are available during learning. As speech is a particular type of action (with acoustic targets), it is expected to activate a mirror neuron mechanism. Indeed, in speech perception, motor centers have been shown to be causally involved in the discrimination of speech sounds. In this paper, we review recent neurophysiological and machine learning-based studies showing (a) the specific contribution of the motor system to speech perception and (b) that automatic phone recognition is significantly improved when motor data are used during training of classifiers (as opposed to learning from purely auditory data).
    Topics in Cognitive Science 06/2014; 6(3):461-475. DOI:10.1111/tops.12095 · 2.88 Impact Factor
  • ABSTRACT: This article presents results from a multidisciplinary research project on the integration and transfer of language knowledge into robots as an empirical paradigm for the study of language development in both humans and humanoid robots. Within the framework of human linguistic and cognitive development, we focus on how three central types of learning interact and co-develop: individual learning about one's own embodiment and the environment, social learning (learning from others), and learning of linguistic capability. Our primary concern is how these capabilities can scaffold each other's development in a continuous feedback cycle as their interactions yield increasingly sophisticated competencies in the agent's capacity to interact with others and manipulate its world. Experimental results are summarized in relation to milestones in human linguistic and cognitive development and show that the mutual scaffolding of social learning, individual learning, and linguistic capabilities creates the context, conditions, and requisites for learning in each domain. Challenges and insights identified as a result of this research program are discussed with regard to possible and actual contributions to cognitive science and language ontogeny. In conclusion, directions for future work are suggested that continue to develop this approach toward an integrated framework for understanding these mutually scaffolding processes as a basis for language development in humans and robots.
    Topics in Cognitive Science 06/2014; DOI:10.1111/tops.12099 · 2.88 Impact Factor
  • ABSTRACT: Calibration continues to receive significant attention in robotics because of its key impact on performance and on the cost associated with the operation of complex robots. Calibration of kinematic parameters is typically the first mandatory step. To this end, a variety of metrology systems and corresponding algorithms have been described in the literature, relying on measurements of the pose of the end-effector using a camera or laser tracking system, or exploiting constraints arising from contacts of the end-effector with the environment. In this work, we take inspiration from the behavior of infants and certain animals, who are believed to use self-stimulation or self-touch to "calibrate" their body representations, and present a new solution to this problem by letting the robot close the kinematic chain by touching its own body. The robot considered in this paper is sensorized with tactile arrays for a total of about 4200 sensing points. The correspondence between the contact point predicted by the existing forward kinematics and the actual position on the robot's 'skin' provides sample data that allow refining the kinematic representation (DH parameters). The data collection procedure is automated (self-touch is autonomously executed by the robot) and can be repeated at any time, providing a compact self-calibration system that does not require an external measurement apparatus.
    Proc. IEEE Int. Conf. Robotics and Automation (ICRA), Hong Kong, China; 06/2014
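The self-touch calibration loop can be reduced to a toy problem: compare the contact point predicted by the kinematic model with the position reported by the skin, and adjust the model parameters to shrink the residual. The planar 2-link arm, single joint offset, and grid search below are illustrative stand-ins for the paper's full DH-parameter estimation:

```python
import numpy as np

# Toy self-touch calibration: recover an unknown joint-angle offset of a
# planar 2-link arm from (joint configuration, measured contact point)
# pairs. Link lengths and the offset are made-up values.
L1, L2 = 0.3, 0.25   # link lengths (m), illustrative

def fingertip(q, offset):
    """Forward kinematics with a calibration offset on joint 1."""
    a, b = q[0] + offset, q[0] + offset + q[1]
    return np.array([L1 * np.cos(a) + L2 * np.cos(b),
                     L1 * np.sin(a) + L2 * np.sin(b)])

rng = np.random.default_rng(2)
true_offset = 0.07   # the unknown miscalibration (rad)
qs = rng.uniform(-1.0, 1.0, size=(50, 2))
touches = np.array([fingertip(q, true_offset) for q in qs])  # "skin" readings

# Grid-search the offset minimizing the summed squared contact residuals
# (a stand-in for iterative least squares over the full DH parameter set).
candidates = np.linspace(-0.2, 0.2, 401)
errors = [np.sum((np.array([fingertip(q, c) for q in qs]) - touches) ** 2)
          for c in candidates]
estimate = candidates[int(np.argmin(errors))]
print(estimate)   # ~0.07, recovering the simulated offset
```

The real system adds the second arm's kinematic chain and tactile-array geometry to the residual, but the structure of the estimation is the same.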

Publication Stats

3k Citations
147.51 Total Impact Points


  • 2011–2014
    • University of Plymouth
      • Centre for Robotics and Neural Systems (CRNS)
      • Adaptive Behaviour and Cognition Laboratory
      Plymouth, England, United Kingdom
  • 2007–2014
    • Istituto Italiano di Tecnologia
      • Department of Robotics, Brain and Cognitive Sciences
      • iCub Facility
      Genova, Liguria, Italy
    • Università degli Studi di Trento
      Trento, Trentino-Alto Adige, Italy
    • University of Sharjah
      Ash Shāriqah, Ash Shāriqah, United Arab Emirates
  • 1996–2014
    • Università degli Studi di Genova
      • Dipartimento di Matematica (DIMA)
      • Dipartimento di Medicina sperimentale (DIMES)
      Genova, Liguria, Italy
  • 2012
    • University of Ferrara
      Ferrara, Emilia-Romagna, Italy
  • 2009
    • Delft University of Technology
      Delft, South Holland, Netherlands
  • 2002–2003
    • Massachusetts Institute of Technology
      Cambridge, Massachusetts, United States