Preprint

Transferability-based Chain Motion Mapping from Humans to Humanoids for Teleoperation


Abstract

Although data-driven motion mapping methods promise intuitive robot control and teleoperation that generates human-like robot movement, they normally require tedious pairwise training for each specific human-robot pair. This paper proposes a transferability-based mapping scheme that allows new robots and human input systems to leverage the mappings of existing trained pairs, forming a mapping transfer chain that reduces the number of new pair-specific mappings that must be generated. The first part of the mapping scheme is a Synergy Mapping via Dual-Autoencoder (SyDa) method, which uses the latent features of two autoencoders to extract the common synergy of the two agents. The second part is a transferability metric that approximates, before any motion mapping models are created, how well the mapping between one pair of agents will perform compared to another pair; it can therefore guide the formation of an optimal mapping chain for a new human-robot pair. Experiments with human subjects and a Pepper robot demonstrated that 1) the SyDa method improves the accuracy and generalizability of pair mappings, 2) the SyDa method allows bidirectional mapping that does not privilege either direction of motion transfer, and 3) the transferability metric measures how compatible two agents are for accurate teleoperation. Together, the SyDa method and the transferability metric create the generalizable and accurate mappings needed to build the transfer mapping chain.
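The abstract does not specify the SyDa architecture or training losses, so the following is only a minimal sketch of the dual-autoencoder idea, assuming PyTorch, hypothetical pose dimensions, and a simple alignment loss that pulls the two agents' latent codes together on paired poses:

```python
# Minimal sketch of a dual-autoencoder ("SyDa"-style) mapping.
# All dimensions, layer sizes, and the alignment loss are assumptions,
# not the paper's specification.
import torch
import torch.nn as nn

LATENT_DIM = 8                 # assumed size of the shared synergy space
HUMAN_DIM, ROBOT_DIM = 42, 17  # hypothetical joint-angle dimensions

def make_autoencoder(pose_dim):
    """Build an encoder/decoder pair mapping poses <-> latent synergies."""
    encoder = nn.Sequential(nn.Linear(pose_dim, 64), nn.ReLU(),
                            nn.Linear(64, LATENT_DIM))
    decoder = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                            nn.Linear(64, pose_dim))
    return encoder, decoder

h_enc, h_dec = make_autoencoder(HUMAN_DIM)
r_enc, r_dec = make_autoencoder(ROBOT_DIM)
opt = torch.optim.Adam([*h_enc.parameters(), *h_dec.parameters(),
                        *r_enc.parameters(), *r_dec.parameters()], lr=1e-3)

def training_step(human_pose, robot_pose):
    """One step on paired poses: reconstruct each agent's pose and pull
    the two latent codes together into a common synergy space."""
    z_h, z_r = h_enc(human_pose), r_enc(robot_pose)
    loss = (nn.functional.mse_loss(h_dec(z_h), human_pose)    # human recon
            + nn.functional.mse_loss(r_dec(z_r), robot_pose)  # robot recon
            + nn.functional.mse_loss(z_h, z_r))               # latent alignment
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Bidirectional mapping: either agent can be the source, because both
# directions route through the same latent synergy space.
def human_to_robot(human_pose): return r_dec(h_enc(human_pose))
def robot_to_human(robot_pose): return h_dec(r_enc(robot_pose))
```

Routing both mapping directions through one shared latent space is what makes the mapping direction-agnostic, in line with finding 2 above.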

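Similarly, the abstract does not give a formula for the transferability metric; the sketch below only illustrates how such a metric, once computed for candidate pairs, could guide chain formation. It assumes pairwise scores in (0, 1] and treats a chain's quality as the product of its pair scores, which reduces chain selection to a shortest-path search under -log weights:

```python
# Hypothetical transferability scores guiding mapping-chain selection.
# The scores and the product-based chain quality are illustrative
# assumptions, not the paper's metric.
import heapq
import math

transferability = {            # assumed pre-training pair scores in (0, 1]
    ("humanA", "pepper"): 0.9,
    ("humanA", "humanB"): 0.8,
    ("humanB", "pepper"): 0.5,
}

def best_chain(source, target, scores):
    """Dijkstra on -log(score): maximizes the product of pair scores."""
    graph = {}
    for (a, b), s in scores.items():
        graph.setdefault(a, []).append((b, s))
        graph.setdefault(b, []).append((a, s))  # SyDa mappings are bidirectional
    frontier = [(0.0, source, [source])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == target:
            return path, math.exp(-cost)  # chain and its product score
        if node in visited:
            continue
        visited.add(node)
        for nxt, s in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (cost - math.log(s), nxt, path + [nxt]))
    return None, 0.0

print(best_chain("humanB", "pepper", transferability))
# ['humanB', 'humanA', 'pepper'] with score ~0.72: under these assumed
# scores, chaining through humanA beats the direct 0.5 mapping.
```

Under this assumed scoring, a new human-robot pair would reuse the already-trained mappings along the chain rather than training a direct pair-specific mapping from scratch.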
References
Conference Paper
Muscle synergies can be seen as fundamental building blocks of motor control, and extracting them from EMG data is a widely used method in motor-related research. Due to the linear nature of the methods commonly used for extraction, those methods fail to represent agonist-antagonist muscle relationships in the extracted synergies. In this paper, we propose to use a special type of neural network, called an autoencoder, for extracting muscle synergies. Using simulated data and real EMG data, we show that autoencoders, contrary to commonly used methods, can capture agonist-antagonist muscle relationships, and that the autoencoder models fit the data significantly better than other methods.
Conference Paper
We describe a system that allows different robots and avatars to be controlled from a real-time motion stream. The underlying problem is that motion data from tracking systems is usually represented differently from the motion data required to drive an avatar or a robot: there may be different joints, and motion may be represented by absolute joint positions and rotations or by a root position, bone lengths, and relative rotations in the skeletal hierarchy. Our system resolves these issues by remapping the tracked motion in real time so that the avatar or robot performs motions that are visually close to those of the tracked person. The mapping can also be reconfigured interactively at run time. We demonstrate the effectiveness of our system through case studies in which a tracked person is embodied as an avatar in immersive virtual reality or as a robot in a remote location, using a variety of tracking systems, humanoid avatars, and robots.
Article
Purpose: To provide a review of various motion capture technologies and to discuss methods for handling the captured data in robotics-related applications. Design/methodology/approach: The paper compares the features and limitations of motion trackers in common use. After introducing the technology, it summarizes robotics-related work undertaken with the sensors and discusses the strengths of different approaches to handling the data; each comparison is presented in a table. Results from the author's experimentation with an inertial motion capture system are discussed based on clustering and segmentation techniques. Findings: The trend in methodology is towards stochastic machine learning techniques such as hidden Markov models and Gaussian mixture models, their hierarchical extensions, and non-linear dimension reduction. The resulting empirical models tend to handle uncertainty well and are suitable for incremental updating. The challenges in human-robot interaction today include generalizing motions to understand motion planning and decisions, and ultimately building context-aware systems. Originality/value: Reviews that describe motion trackers together with recent methodologies for analyzing the data they capture are not common; this review concentrates on applications in the robotics field, an area worth surveying regularly given the rapid progress in sensors and data modeling.
Conference Paper
The goal of this study is to develop a biped humanoid robot that can observe a human dance performance and imitate it. To achieve this goal, we propose a task model of lower-body motion, which consists of task primitives (what to do) and skill parameters (how to do it). Based on this model, a sequence of task primitives and their skill parameters is detected from human motion, and robot motion is regenerated from the detected result under the constraints of the robot. This model can generate human-like lower-body motion, including various waist motions as well as various stepping motions of the legs, and the generated motions can be performed stably on an actual robot supported by its own legs. We used the improved robot hardware HRP-2, which has superior features in body weight, actuators, and waist DOF. Using the proposed method and HRP-2, we realized a performance of a Japanese folk dance by the robot, synchronized with the performance of a human grand master on the same stage.
Conference Paper
One of the main aims of humanoid robotics is to develop robots that are capable of interacting naturally with people. However, to understand the essence of human interaction, it is crucial to investigate the contribution of behavior and appearance. Our group's research explores these relationships by developing androids that closely resemble human beings in both aspects. If humanlike appearance causes us to evaluate an android's behavior from a human standard, we are more likely to be cognizant of deviations from human norms. Therefore, the android's motions must closely match human performance to avoid looking strange, including such autonomic responses as the shoulder movements involved in breathing. This paper proposes a method to implement motions that look human by mapping their three-dimensional appearance from a human performer to the android and then evaluating the verisimilitude of the visible motions using a motion capture system. This approach has several advantages over current research, which has focused on copying a person's moving joint angles to a robot: (1) in an android robot with many degrees of freedom and kinematics that differs from that of a human being, it is difficult to calculate which joint angles would make the robot's posture appear similar to the human performer; and (2) the motion that we perceive is at the robot's surface, not necessarily at its joints, which are often hidden from view.
Conference Paper
This paper deals with the imitation of human motions by a humanoid robot based on marker-point measurements from a 3D motion capture system. For imitating the human's motion, we propose a Cartesian control approach in which a set of control points on the humanoid is selected and the robot is virtually connected to the measured marker points via translational springs. The forces from these springs drive a simplified simulation of the robot dynamics, such that the real robot motion can finally be generated by joint position controllers that effectively manage joint friction and other uncertain dynamics. This procedure allows the robot to follow the marker points without explicitly computing inverse kinematics. For the implementation of the marker control on a humanoid robot, we combine it with a center-of-gravity-based balancing controller for the lower-body joints. We integrate the marker-control-based motion imitation with the mimesis model, a mathematical model for motion learning, recognition, and generation based on hidden Markov models (HMMs). Learning, recognition, and generation of motion primitives are all performed in marker coordinates, paving the way for extending these concepts to task-space problems and object manipulation. Finally, an experimental evaluation of the presented concepts on a 38-degree-of-freedom humanoid robot is discussed.
Conference Paper
Humans have at some point learned an abstraction of the capabilities of their arms: just by looking at a scene, they can decide which places or objects they can easily reach and which are difficult to approach. Possessing a similar abstraction of a robot arm's capabilities in its workspace is important for grasp planners, path planners, and task planners. In this paper, we show that robot arm capabilities manifest themselves as directional structures specific to workspace regions. We introduce a representation scheme that makes it possible to visualize and inspect these directional structures, which are then captured in the form of a map we name the capability map. Using this capability map, a manipulator can deduce places that are easy to reach; furthermore, it can transport an object to a place where versatile manipulation is possible, or a mobile manipulator or humanoid torso can position itself to enable optimal manipulation of an object.
Conference Paper
This work presents a methodology to generate dynamically stable whole-body motions for a humanoid robot, converted from human motion capture data. The methodology consists of a kinematic mapping and a dynamic mapping, for human-likeness and stability, respectively. The kinematic mapping includes the scaling of human foot and Zero Moment Point (ZMP) trajectories, considering the geometric differences between a humanoid robot and a human, and also converts human upper-body motions using a previously published method. The dynamic mapping modifies the humanoid pelvis motion to ensure the stability of the whole-body motions produced by the kinematic mapping. In addition, we propose a simplified human model to obtain a human ZMP trajectory, which serves as the reference ZMP trajectory for the humanoid robot to imitate during the kinematic mapping. A human whole-body dancing motion was converted by the methodology and performed by a humanoid robot with online balancing controllers.
Conference Paper
This paper presents a method for importing human dance motion into humanoid robots through visual observation. The human motion data is acquired from a motion capture system consisting of 8 cameras and a cluster of 8 PCs. The whole motion sequence is then divided into motion elements and clustered into groups according to the correlation of end-effector trajectories; we call these segments 'motion primitives'. New dance motions are generated by concatenating these motion primitives. We are also working to make a humanoid perform these original and generated motions using inverse kinematics and dynamic balancing techniques.
Article
The term ‘synergy’ – from the Greek synergia – means ‘working together’. The concept of multiple elements working together towards a common goal has been extensively used in neuroscience to develop theoretical frameworks, experimental approaches, and analytical techniques for understanding the neural control of movement, and in applications for neuro-rehabilitation. In the past decade, roboticists have successfully applied the framework of synergies to create novel design and control concepts for artificial hands, i.e., robotic hands and prostheses. At the same time, robotic research on the sensorimotor integration underlying the control and sensing of artificial hands has inspired new research approaches in neuroscience and has provided useful instruments for novel experiments.
Conference Paper
In this work, we propose a system that enables a humanoid robot to reproduce imitated human motions during continuous, online observation. To achieve this goal, several problems of imitation have to be solved; in this paper, we treat two main issues. One is the mapping between different kinematic structures, and the other is computing a humanoid body pose that satisfies static stability while reproducing the human motion obtained by visual motion capture. Experimental results based on Webots simulation and subsequent execution on a Darwin-OP humanoid robot show the validity of the proposed system.
Article
In this paper, a system for transferring human grasping skills to a robot is presented. To reduce the dimensionality of the grasp postures, we extracted three synergies from data on human grasping experiments and trained a neural network with the features of the objects and the coefficients of the synergies. The trained neural network was then employed to control robot grasping via an individually optimized mapping between the human hand and the robot hand. As force control was unavailable on our robot hand, we designed a simple strategy for the robot to grasp and hold objects by exploiting tactile feedback at the fingers. Experimental results demonstrated that the system can generalize the transferred skills to grasp new objects. doi: 10.1016/j.mechatronics.2010.11.003
C. Stanton, A. Bogdanovych, and E. Ratanasena: Teleoperation of a humanoid robot using full-body motion capture, example movements, and machine learning. Australas. Conf. Robot. Autom. (ACRA), pp. 3-5 (2012)
J. Koenemann, F. Burget, and M. Bennewitz: Real-time imitation of human whole-body motions by humanoids. Proc. IEEE Int. Conf. Robot. Autom. (ICRA), pp. 2806-2812 (2014). doi: 10.1109/ICRA.2014.6907261
J. Aleotti, A. Skoglund, and T. Duckett: Position teaching of a robot arm by demonstration with a wearable input device. Intell. Manip. Grasping, pp. 459-464 (2004)
T. Yoshikawa: Analysis and control of robot manipulators with redundancy. Proc. 1st International Symposium on Robotics Research, MIT Press, Cambridge, MA, pp. 735-747 (1983)