Conference Paper

The HERMES humanoid system: A platform for full-body teleoperation with balance feedback

... Force feedback, tactile feedback, and vibro-tactile feedback are the most commonly used in teleoperation scenarios. The interface providing kinesthetic force feedback can be similar to an exoskeleton [101,192] or can be cable driven. The latter provides only a tension force feedback, while the former provides force feedback in different directions. ...
... This feedback allows the human to teach the robot how to compliantly interact with the environment. In [192], the feedback force applied to the human's torso is proportional to how close the robot is to tipping over. This is estimated from the distance between the robot's center of pressure and the edge of the support polygon. ...
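The balance-feedback law this excerpt describes can be sketched in a few lines. The gains, the one-dimensional simplification, and the saturation below are illustrative assumptions, not the cited authors' implementation:

```python
# Hypothetical sketch: the force applied to the operator's torso grows as
# the robot's center of pressure (CoP) nears the support-polygon edge.

def balance_feedback_force(cop, edge_min, edge_max, f_max=100.0):
    """Feedback force proportional to CoP proximity to the nearest edge
    of the support polygon (1-D case for clarity)."""
    center = 0.5 * (edge_min + edge_max)
    half_width = 0.5 * (edge_max - edge_min)
    # Normalized proximity: 0 at the polygon center, 1 at an edge.
    proximity = min(abs(cop - center) / half_width, 1.0)
    return f_max * proximity
```

With the CoP at the polygon center the operator feels no force; at the edge the force saturates at `f_max`.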
... Different authors considered distinct body parts as the target of the mapping. A frequent approach is to map the motion of the human wrist to that of the robot end effectors [55,79,101,192]. A commonly used mapping in the literature is an identity map between the rotational motion of the human and the robot, which usually works because of the anthropomorphic robot design, whereas for translational motion a fixed gain is used [55] to account for the differences in size. ...
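The retargeting rule described here (identity map on rotation, fixed gain on translation) can be sketched directly; the gain value is an assumption chosen only for illustration:

```python
import numpy as np

def retarget_wrist_pose(human_R, human_p, gain=0.6):
    """Map a human wrist pose (rotation matrix, position) to a robot
    end-effector pose: copy the rotation, scale the translation."""
    return human_R.copy(), gain * np.asarray(human_p, dtype=float)

# A wrist 0.5 m forward and 1.0 m up maps to a scaled robot target.
R, p = retarget_wrist_pose(np.eye(3), [0.5, 0.0, 1.0])
```

The fixed gain compensates for the size difference between the human and the (smaller) robot workspace.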
Thesis
This thesis aims to investigate systems and tools for teleoperating a humanoid robot. Robot teleoperation is crucial for sending and controlling robots in environments that are dangerous or inaccessible for humans (e.g., disaster-response scenarios, contaminated environments, or extraterrestrial sites). The term teleoperation most commonly refers to direct and continuous control of a robot. In this case, the human operator guides the motion of the robot with her/his own physical motion or through some physical input device. One of the main challenges is to control the robot in a way that guarantees its dynamical balance while trying to follow the human references. In addition, the human operator needs some feedback about the state of the robot and its work site through remote sensors in order to comprehend the situation or feel physically present at the site, producing effective robot behaviors. Complications arise when the communication network is non-ideal. In this case, the commands from human to robot, together with the feedback from robot to human, can be delayed. These delays can be very disturbing for the human operator, who cannot teleoperate the robot avatar in an effective way. Another crucial point to consider when setting up a teleoperation system is the large number of parameters that have to be tuned to effectively control the teleoperated robot. Machine-learning approaches and stochastic optimizers can be used to automate the learning of some of these parameters. In this thesis, we propose a teleoperation system that has been tested on the humanoid robot iCub. We used an inertial-technology-based motion capture suit as the input device to control the humanoid and a virtual reality headset connected to the robot cameras to get visual feedback. We first translated the human movements into equivalent robot ones by developing a motion retargeting approach that achieves human-likeness while trying to ensure the feasibility of the transferred motion. 
We then implemented a whole-body controller to enable the robot to track the retargeted human motion. The controller was later optimized in simulation to achieve good tracking of the whole-body reference movements by resorting to a multi-objective stochastic optimizer, which allowed us to find robust solutions that work on the real robot in a few trials. To teleoperate walking motions, we implemented a higher-level teleoperation mode in which the user can use a joystick to send reference commands to the robot. We integrated this setting into the teleoperation system, which allows the user to switch between the two different modes. A major problem preventing the deployment of such systems in real applications is the presence of communication delays between the human input and the feedback from the robot: even a few hundred milliseconds of delay can irremediably disturb the operator, let alone a few seconds. To overcome these delays, we introduced a system in which a humanoid robot executes commands before it actually receives them, so that the visual feedback appears synchronized to the operator, whereas the robot executed the commands in the past. To do so, the robot continuously predicts future commands by querying a machine-learning model that is trained on past trajectories and conditioned on the last received commands.
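The delay-compensation scheme in this abstract can be sketched minimally. The thesis queries a learned model; a constant-velocity extrapolator stands in for it here, and the step-based interface is an assumption:

```python
# The robot acts on a *predicted* future command so that, despite the
# network delay, the visual feedback appears synchronized to the operator.

def predict_command(history, delay_steps):
    """Extrapolate the command `delay_steps` ahead of the last received
    one, assuming the per-step change stays constant (stand-in for the
    learned model conditioned on past trajectories)."""
    step = history[-1] - history[-2]
    return history[-1] + delay_steps * step
```

For received commands 0.0, 1.0, 2.0 and a three-step delay, the robot would execute 5.0 now.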
... Depending on whether humans and robots are located in close proximity or not, interaction can happen in two different forms: a) remote interaction, when the robot and human are not collocated; b) proximate interaction, when the robot and the human are collocated [3]. In either form, especially in remote interaction, the user always needs some feedback from the robot side, such as information about the state of the robot and the environment in teleoperation [4]. To address this challenge, a number of approaches have been proposed. ...
... In the literature, there are plenty of papers that provide different HRI techniques through HMDs. In [4], the user receives visual feedback about the state of a teleoperated humanoid robot in the HMD. In another project [7], the body motion of a human is translated into a control input to the robot, which is simulated in the VR environment in the HMD. ...
... In [4], it is mentioned that a total system-loop delay of 175 ms is sufficient for controlling the balance of the robot. There is no experiment to verify that this time delay will remain sufficient once legs are added to the motion capture suit to enable stepping control. ...
Preprint
We propose a novel human-drone interaction paradigm where a user directly interacts with a drone to light-paint predefined patterns or letters through hand gestures. The user wears a glove equipped with an IMU sensor to draw letters or patterns in midair. The developed ML algorithm detects the drawn pattern, and the drone light-paints each pattern in midair in real time. The proposed classification model correctly predicts all of the input gestures. The DroneLight system can be applied in drone shows, advertisements, distant communication through text or patterns, rescue operations, etc. To our knowledge, it would be the world's first human-centric robotic system that people can use to send messages based on light-painting over distant locations (drone-based instant messaging). Another unique application of the system would be the development of a vision-driven rescue system that reads light-painting by a person in distress and triggers a rescue alarm.
... Force feedback, tactile feedback, and vibro-tactile feedback are the most commonly used in teleoperation scenarios. The interface providing kinesthetic force feedback can be similar to an exoskeleton [49] or can be cable driven. The latter provides only a tension force feedback, while the former provides force feedback in different directions. ...
... This feedback allows the human to teach the robot how to compliantly interact with the environment. In [49], the feedback force applied to the human's torso is proportional to how close the robot is to tipping over. This is estimated from the distance between the robot's CoP and the edge of the support polygon. ...
Article
Teleoperation of humanoid robots enables the integration of the cognitive skills and domain expertise of humans with the physical capabilities of humanoid robots. The operational versatility of humanoid robots makes them the ideal platform for a wide range of applications when teleoperating in a remote environment. However, the complexity of humanoid robots imposes challenges for teleoperation, particularly in unstructured dynamic environments with limited communication. Many advancements have been achieved in the last decades in this area, but a comprehensive overview is still missing. This survey article gives an extensive overview of humanoid robot teleoperation, presenting the general architecture of a teleoperation system and analyzing the different components. We also discuss different aspects of the topic, including technological and methodological advances, as well as potential applications.
... Force feedback, tactile feedback, and vibro-tactile feedback are the most commonly used in teleoperation scenarios. The interface providing kinesthetic force feedback can be similar to an exoskeleton [49] or can be cable driven. The latter provides only a tension force feedback, while the former provides force feedback in different directions. ...
... This feedback allows the human to teach the robot how to compliantly interact with the environment. In [49], the feedback force ... Fig. 5: General concept for whole-body bilateral teleoperation. The robot WBC computes joint torques τ_j using the reference interaction forces F_H from the operator and the error e between the human state X_H and the robot state X_R. ...
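The signal flow in the quoted figure caption (torques from operator forces and a state error) can be rendered as a toy computation. The proportional gain, dimensions, and Jacobian-transpose mapping are illustrative assumptions, not the survey's controller:

```python
import numpy as np

def wbc_torques(F_H, X_H, X_R, J, k_p=50.0):
    """Toy whole-body-controller step: joint torques tau_j from the
    operator's reference forces F_H and the tracking error
    e = X_H - X_R, mapped through the Jacobian transpose."""
    e = np.asarray(X_H, dtype=float) - np.asarray(X_R, dtype=float)
    return J.T @ (np.asarray(F_H, dtype=float) + k_p * e)

# With zero tracking error, only the operator's force is transmitted.
tau = wbc_torques([1.0, 0.0], [0.2, 0.1], [0.2, 0.1], np.eye(2))
```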
Preprint
Full-text available
Teleoperation of humanoid robots enables the integration of the cognitive skills and domain expertise of humans with the physical capabilities of humanoid robots. The operational versatility of humanoid robots makes them the ideal platform for a wide range of applications when teleoperating in a remote environment. However, the complexity of humanoid robots imposes challenges for teleoperation, particularly in unstructured dynamic environments with limited communication. Many advancements have been achieved in the last decades in this area, but a comprehensive overview is still missing. This survey paper gives an extensive overview of humanoid robot teleoperation, presenting the general architecture of a teleoperation system and analyzing the different components. We also discuss different aspects of the topic, including technological and methodological advances, as well as potential applications. A web-based version of the paper can be found at https://humanoid-teleoperation.github.io/.
... Humanoid robots have the potential to aid workers in physically demanding and dangerous jobs such as firefighting and disaster relief [1], [2]. In order to aid in these tasks, humanoid robots must be capable of manipulation and locomotion, while being robust to intermittent contact and disturbances. ...
... The goal of this experiment was to test the performance of Hybrid LMC for locomotion and tracking when given human command signals obtained directly from hardware, as a step towards teleoperation (e.g., the HERMES humanoids [1]). This test scenario is important as it suggests the viability of using reference trajectories that are irregular and rapidly changing compared to those used in training (5th-order velocity polynomials). ...
Preprint
Control of wheeled humanoid locomotion is a challenging problem due to the nonlinear dynamics and under-actuated characteristics of these robots. Traditionally, feedback controllers have been utilized for stabilization and locomotion. However, these methods are often limited by the fidelity of the underlying model used, the choice of controller, and the environmental variables considered (surface type, ground inclination, etc.). Recent advances in reinforcement learning (RL) offer promising methods to tackle some of these conventional feedback-controller issues, but require large amounts of interaction data to learn. Here, we propose a hybrid learning and model-based controller, Hybrid LMC, that combines the strengths of a classical linear quadratic regulator (LQR) and ensemble deep reinforcement learning. The ensemble is composed of multiple Soft Actor-Critic (SAC) networks and is utilized to reduce the variance of the RL networks. By using a feedback controller in tandem, the network exhibits stable performance in the early stages of training. As a preliminary step, we explore the viability of Hybrid LMC in controlling wheeled locomotion of a humanoid robot over a set of different physical parameters in the MuJoCo simulator. Our results show that Hybrid LMC achieves better performance compared to other existing techniques and has increased sample efficiency.
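The core combination step of such a hybrid controller can be sketched as a weighted blend. The blending weight and the plain ensemble mean below are assumptions for illustration, not the paper's actual scheme:

```python
# Blend a model-based LQR action with the mean action of an ensemble of
# SAC policies; averaging the ensemble reduces the variance of the RL
# term, and the LQR term stabilizes behavior early in training.

def hybrid_action(u_lqr, u_sac_ensemble, beta=0.5):
    """beta weights the model-based term against the learned term."""
    u_rl = sum(u_sac_ensemble) / len(u_sac_ensemble)
    return beta * u_lqr + (1.0 - beta) * u_rl
```

With `beta=1.0` the controller falls back to pure LQR, which is useful before the ensemble has seen enough interaction data.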
... Furthermore, they calculated additional reference ZMP information without the foot force-sensor devices they had used before, which relieved the operator's workload. In [44,45], the authors proposed, designed, and developed a whole-body human-in-the-loop teleoperation control system with balance feedback, which enabled the operator to control the robot to perform powerful manipulation tasks. Their work is impressive and enlightening for ours. ...
... In the work presented in [42,43], the authors only used the information of the human's end effector and center of mass to generate reference joint angles, and the foot height was also restricted. In [44,45], only the upper limbs and hips were considered in the motion mapping, and the motions of the lower limbs were subject to the torso orientation of the operator. Single-support motions were also not addressed in that work. ...
Article
Full-text available
Due to the limitations on the capabilities of current robots regarding task learning and performance, imitation is an efficient social learning approach that endows a robot with the ability to transmit and reproduce human postures, actions, behaviors, etc., as a human does. Stable whole-body imitation and task-oriented teleoperation via imitation are challenging issues. In this paper, a novel comprehensive and unrestricted real-time whole-body imitation system for humanoid robots is designed and developed. To map human motions to a robot, an analytical method called geometrical analysis based on link vectors and virtual joints (GA-LVVJ) is proposed. In addition, a real-time locomotion method is employed to realize a natural mode of operation. To achieve safe mode switching, a filter strategy is proposed. Then, two quantitative vector-set-based methods of similarity evaluation focusing on the whole body and local links, called the Whole-Body-Focused (WBF) method and the Local-Link-Focused (LLF) method, respectively, are proposed and compared. Two experiments conducted to verify the effectiveness of the proposed methods and system are reported. Specifically, the first experiment validates the good stability and similarity features of our system, and the second experiment verifies the effectiveness with which complicated tasks can be executed. Finally, an imitation learning mechanism in which the joint angles of demonstrators are mapped by GA-LVVJ is presented and developed to extend the proposed system.
... Humanoid robots have been in the spotlight for a long time due to their promising potential to address problems in diverse scenarios from elderly care to disaster response [1], [2], [3]. Despite the recent advancements, developing fully autonomous humanoid robots capable of achieving human-level adaptation in navigating harsh terrains and executing physical tasks is still extremely challenging. ...
Preprint
Teleoperation has emerged as an alternative solution to fully-autonomous systems for achieving human-level capabilities on humanoids. Specifically, teleoperation with whole-body control is a promising hands-free strategy to command humanoids but demands more physical and mental effort. To mitigate this limitation, researchers have proposed shared-control methods incorporating robot decision-making to aid humans in low-level tasks, further reducing operation effort. However, shared-control methods for wheeled humanoid telelocomotion at the whole-body level have yet to be explored. In this work, we study how whole-body feedback affects the performance of different shared-control methods for obstacle avoidance in diverse environments. A Time-Derivative Sigmoid Function (TDSF) is proposed to generate more intuitive force feedback from obstacles. Comprehensive human experiments were conducted, and the results indicate that force feedback enhances whole-body telelocomotion performance in unfamiliar environments but can reduce performance in familiar environments. Conveying the robot's intention through haptics showed further improvements, since the operator can utilize the force feedback for short-distance planning and visual feedback for long-distance planning.
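One plausible reading of a sigmoid-shaped obstacle-feedback law like the TDSF can be sketched as follows; the exact functional form, gains, and signs here are assumptions, not the paper's definition:

```python
import math

def obstacle_force(distance, range_rate, f_max=10.0, k=5.0, d0=1.0):
    """Hypothetical obstacle force feedback: a sigmoid of obstacle
    distance whose argument stiffens when the operator is closing in
    (negative range_rate), giving earlier, smoother haptic warning."""
    x = k * (d0 - distance) - range_rate
    return f_max / (1.0 + math.exp(-x))
```

The force rises smoothly as the obstacle gets closer, and rises earlier when the operator approaches it quickly.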
... The user's dynamics, measured by the force plate and exoskeleton system, can be compared with the robot's dynamics to calculate the feedback force for the user. This strategy has been employed in the authors' previous work on whole-body teleoperation of humanoid robots [15], [16]. ...
Preprint
Full-text available
Robotic systems that can dynamically combine manipulation and locomotion could facilitate dangerous or physically demanding labor. For instance, firefighter humanoid robots could leverage their body by leaning against collapsed building rubble to push it aside. Here we introduce a teleoperation system that targets the realization of these tasks using human whole-body motor skills. We describe a new wheeled humanoid platform, SATYRR, and a novel hands-free teleoperation architecture using a whole-body Human Machine Interface (HMI). This system enables telelocomotion of the humanoid robot using the operator's body motion, freeing their arms for manipulation tasks. In this study we evaluate the efficacy of the proposed system on hardware and explore the control of SATYRR using two teleoperation mappings that map the operator's body pitch and twist to the robot velocity or acceleration. Through experiments and user feedback we showcase our preliminary findings on the pilot-system response. Results suggest that the HMI is capable of effectively telelocomoting SATYRR, that pilot preferences should dictate the appropriate motion mapping and gains, and finally that the pilot can better learn to control the system over time. This study represents a fundamental step towards the realization of combined manipulation and locomotion via teleoperation.
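The velocity-mapping variant described in this abstract reduces to two proportional channels. The gain values below are made-up placeholders, not SATYRR's tuned parameters:

```python
# Operator torso pitch commands forward velocity; torso twist commands
# yaw rate (the acceleration-mapping variant would integrate instead).

def telelocomotion_command(pitch_rad, twist_rad, k_v=1.5, k_w=1.0):
    """Map operator body posture to a (forward velocity, yaw rate)
    command for the wheeled humanoid."""
    return k_v * pitch_rad, k_w * twist_rad
```

As the abstract notes, the right gains (and the choice between velocity and acceleration mapping) are pilot-dependent.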
... The T2822 has significant gear reduction (51.24:1), similar to [33,78], which provides the benefit of increased mass-specific torque and reduced Joule heating at the cost of decreased transparency. The EC60 has a very modest (4.33:1) planetary gearbox, a strategy also employed by the MIT Cheetah [71] and Hermes [87] machines. The U10DD ...
Article
It has been twenty years since the advent of the first power-autonomous legged robots, yet they have still not been deployed at scale. One fundamental challenge in legged machines is that actuators must perform work at relatively high speed in swing but also at high torque in stance. Legged machines must also be able to “feel” the reaction forces in both normal (to switch from swing to stance control) and tangential (to detect slip or stubbing) directions for appropriate gait-level control. This “feeling” can be accomplished by explicit force/torque sensors in the foot/leg/actuator, or by measuring the deflection of a series mechanical spring. In this thesis we analyze machines that obtain this force information directly through the implementation of highly backdriveable actuators that require no additional sensors (apart from those already required for commutation). We address the holistic design of robots with backdriveable actuators including motor, transmission, compliance, degrees of freedom, and leg design. Moreover, this work takes such actuators to the conceptual limit by removing the gearbox entirely and presenting the design and construction of the first direct-drive legged robot family (a monopod, a biped, and a quadruped). The actuator analysis that made these direct-drive machines possible has gained traction in state-of-the-art modestly geared machines (legged robots as well as robot arms), many of which now use the same motors. A novel leg design (the symmetric five-bar, where the “knee” is allowed to ride above the “hip”) decreases the wasted Joule heating by a factor of four per unit of torque produced over the workspace compared to a conventional serial design, making the 40 cm hip-to-hip Minitaur platform possible without violating the thermal limit of its motors. A means of comparing actuator transparency (the curve representing collision energy vs. 
contact information) is presented and is used to compare the performance of actuators with similar continuous torque but vastly different gear ratios (1:1, 4.4:1, 51:1). This transparency can be used to show the different outcomes in a representative task where the actuators must “feel” a ball on a track through contact and then recirculate to “cage” the ball before the energy required to “feel” has caused the ball to roll out of the workspace. For a 50 g rubber ball, the direct drive actuator is able to successfully accomplish the task, but the 4.4:1 actuator is not able to cage the ball in time, and the 51:1 actuator cannot feel the ball at all before pushing it out of the workspace. Finally, the actuation and force measurement/estimation strategies of the three leading commercial legged robots are compared, alongside other considerations for real-world fielded machines. This thesis seeks to show that legged robots (both academic and commercial) whose actuators are designed with careful consideration for proprioception can have similar performance to more conventional machines, with better robustness and greatly reduced complexity.
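The torque-versus-heating trade-off running through this thesis abstract follows from one standard relation: winding Joule heating for a required output torque is P = R·(τ/(G·k_t))², so a gear ratio G cuts heating by G² at the cost of transparency. The motor constants below are illustrative, not those of the cited machines:

```python
def joule_heating(output_torque, k_t, R, gear_ratio=1.0):
    """Winding Joule heating (W) needed to produce `output_torque` (N*m)
    with torque constant k_t (N*m/A), winding resistance R (ohm), and an
    ideal gearbox of ratio `gear_ratio` (motor torque = output / G)."""
    current = output_torque / (gear_ratio * k_t)
    return R * current ** 2

direct = joule_heating(5.0, k_t=0.5, R=0.2)                  # direct drive
geared = joule_heating(5.0, k_t=0.5, R=0.2, gear_ratio=4.33)  # modest gearbox
```

The geared case dissipates 4.33² ≈ 18.7 times less heat for the same output torque, which is exactly why geared designs trade away transparency for thermal headroom.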
... Load-sharing policies also explore jointly manipulating objects within simulation [130], and planning methods address safety and efficiency [128,236] or formation control [9]. Physical interaction and teleoperation can also be combined for effective closed-loop control [231]. 58 ...
Thesis
Full-text available
[NOTE: More information and videos, including the thesis defense presentation, are available at https://delpreto-thesis.csail.mit.edu]

This thesis presents algorithms and systems that use unobtrusive wearable sensors for muscle, brain, and motion activity to enable more plug-and-play human-robot interactions. Detecting discrete commands and continuous motions creates a communication vocabulary for remote control or collaboration, and learning frameworks allow robots to generalize from these interactions. Each of these building blocks focuses on lowering the barrier to casual users benefiting from robots by reducing the amount of training data, calibration data, and sensing hardware needed. This thesis thus takes a step towards more ubiquitous robot assistants that could extend humans' capabilities and improve quality of life.

Classification and motion estimation algorithms create a plug-and-play vocabulary for robot control and teaching. Supervised learning pipelines detect directional gestures from muscle signals via electromyography (EMG), and unsupervised learning pipelines expand the vocabulary without requiring data collection. Classifiers also detect error judgments in brain signals via electroencephalography (EEG). Continuous motions are detected in two ways. Arm or walking trajectories are estimated from an inertial measurement unit (IMU) by leveraging in-task EMG-based gestures that demarcate stationary waypoints; the paths are then refined in an apprenticeship phase using gestures. Hand heights during lifting tasks are also estimated using EMG.

Two frameworks for learning by demonstration build on these foundations. A generalization algorithm uses a single example trajectory and a constraint library to synthesize trajectories with similar behaviors in new task configurations. Alternatively, for tasks where the robot can autonomously explore behaviors, an apprenticeship framework augments self-supervision with intermittent demonstrations.

Systems use and evaluate these algorithms with three interaction paradigms. Subjects supervise and teleoperate robot minions that perform object selection or navigation in mock safety-critical or inaccessible settings. Robot sidekicks collaborate with users to jointly lift objects and perform assemblies. Finally, robot apprentices generalize cable-routing trajectories or grasping orientations from few human demonstrations. Experiments with each system evaluate classification or motion estimation performance and user interface efficacy.

This thesis thus aims to enhance and simplify human-robot interaction in a variety of settings. Allowing more people to explore novel uses for robots could take a step towards ubiquitous robot assistants that have captured imaginations for decades.
... 1) Human Motion Perception: Currently, the use of exoskeletons [40], [41], data gloves [42], and force feedback devices [4] constitutes the most mature and reliable motion perception methods. It is easy to measure the motion parameters of the human body with a motor encoder or angle sensor and then directly control the joints of the robot. ...
Article
Full-text available
Artificial intelligence (AI) technology has greatly expanded human capabilities through perception, understanding, action, and learning. The future of AI depends on cooperation between humans and AI. In addition to a fully automated or manually controlled machine, a machine can work in tandem with a human with different levels of assistance and automation. Machines and humans cooperate in different ways. Three strategies for cooperation are described in this article, as well as the nesting relationships among different control methods and cooperation strategies. Based on human thinking and behavior, a hierarchical human–machine cooperation (HMC) framework is improved and extended to design safe, efficient, and attractive systems. We review the common methods of perception, decision-making, and execution in the HMC framework. Future applications and trends of HMC are also discussed.
... The dynamic mass is 2×14 kg, the maximum span is 2×936 mm, and the current-control sampling rate is 40 Hz, the joint-internal rate 3 kHz, and the Cartesian sampling rate 1 kHz [30]. The HERMES humanoid system is designed for studying whole-body human-in-the-loop control with balance feedback [31], [32]. These upper-body teleoperation systems provide a haptic, immersive experience for manipulation with force/torque feedback. ...
... The above examples have one common thread, i.e., obviating the exposure to harm and risk to human safety. Thus, when operating in hazardous environments, in most cases the robots act as a physical extension of their human operators to enhance their dexterity, sensory experience, and cognition (Wang et al., 2015). Endowing a human operator with the ability to utilize the robot to its maximum potential requires the development of intuitive user interfaces for human-robot interaction (HRI). ...
Article
Full-text available
Healthcare workers face a high risk of contagion during a pandemic due to their close proximity to patients. The situation is further exacerbated in the case of a shortage of personal protective equipment that can increase the risk of exposure for the healthcare workers and even non-pandemic related patients, such as those on dialysis. In this study, we propose an emergency, non-invasive remote monitoring and control response system to retrofit dialysis machines with robotic manipulators for safely supporting the treatment of patients with acute kidney disease. Specifically, as a proof-of-concept, we mock-up the touchscreen instrument control panel of a dialysis machine and live-stream it to a remote user’s tablet computer device. Then, the user performs touch-based interactions on the tablet device to send commands to the robot to manipulate the instrument controls on the touchscreen of the dialysis machine. To evaluate the performance of the proposed system, we conduct an accuracy test. Moreover, we perform qualitative user studies using two modes of interaction with the designed system to measure the user task load and system usability and to obtain user feedback. The two modes of interaction included a touch-based interaction using a tablet device and a click-based interaction using a computer. The results indicate no statistically significant difference in the relatively low task load experienced by the users for both modes of interaction. Moreover, the system usability survey results reveal no statistically significant difference in the user experience for both modes of interaction except that users experienced a more consistent performance with the click-based interaction vs. the touch-based interaction. Based on the user feedback, we suggest an improvement to the proposed system and illustrate an implementation that corrects the distorted perception of the instrumentation control panel live-stream for a better and consistent user experience.
... Research has sought to mitigate the effects of remote perception and manipulation by developing various methods of transferring user input to robotic output given a specific set of constraints [13]. Interfaces may include handheld devices such as phones [14] and PDA systems [15], control devices such as the traditional joystick, motion capture and gesture-based controls [16,17,18,19], or whole-body teleoperation [20]. Oftentimes, teleoperation requires a negotiation between various desired features [21]. ...
Thesis
Full-text available
Teleoperation of an articulated robot in dynamic and human-facing environments may require the operator to produce fluid, expressive, and human-like motion. This work examines the performances and perceptions of two movement profiles generated by different methods of teleoperation via an Xbox One controller. The first method is a traditional method of control in which the angles of individual joints are prescribed in a sequential fashion until the desired configuration of the robot arm is achieved. The second method of teleoperation is a choreography-inspired method of control named Robot Choreography Center (RCC), which utilizes choreographic abstractions from the Laban/Bartenieff Movement System to index a database of poses, allowing multiple joints in the robot arm to move simultaneously. The two methods of control are compared to one another using performance, perception, and preference metrics collected in two user studies: an in-lab user study and an observer-based perception study. Success rates indicated that both methods of control were over 80% successful for static tasks requiring a specific end configuration, while the choreography-inspired method (RCC) was on average 11.85% more successful for dynamic tasks requiring a transfer of momentum to achieve a desired task. These performance-based studies showed that the choreography-inspired method facilitated improved control over the robot even in functional tasks. Further analysis showed that video game exposure was positively correlated with performance level. The preference-based results from the in-lab study described the traditional benchmark method as more precise, easier to use, safer, and more articulate, while the choreography-inspired (RCC) method was identified as faster, more fluid, and more expressive. 
These results led to the development of a perception-based study conducted on a new pool of participants, who were asked to select descriptive labels for the movement profiles generated by both methods of teleoperation for static and dynamic tasks. The two methods of control were described similarly when completing static tasks; however, 45% of participants selected the word "human-like" to describe the movement profile generated using the choreography-inspired (RCC) method to complete dynamic tasks. Thus, these results provide initial ideas about how qualitative descriptors of movement, such as "fluid" and "human-like", may be quantified and produced in teleoperated motion through parameters such as the number of joints moving simultaneously. Similarly, when comparing the knee joints of both humans and robots, it appears that the natural system has a greater number of points of simultaneous actuation. Future work could further develop these quantitative models of human-assigned adjectives for motion.
... In contrast, in [6], the task-space teleoperated arm of HERMES can move considerably faster and break through a wall barrier using mechanical-linkage-based motion capture. Hence, we assume two keys to dynamic teleoperation of a robotic arm: 1) the motion capture frequency must be sufficiently high (≥ 1 kHz) for relatively fast positional updates and accurate velocity and force estimations, which few existing tetherless motion capture systems can achieve. ...
Preprint
Teleoperation (i.e., controlling a robot with human motion) proves promising in enabling a humanoid robot to move as dynamically as a human. But how human motion is mapped to a humanoid robot matters, because a human and a humanoid robot rarely have identical topologies and dimensions. This work presents an experimental study that utilizes reaction tests to compare the proposed joint space mapping and the proposed task space mapping for dynamic teleoperation of an anthropomorphic robotic arm that possesses human-level dynamic motion capabilities. The experimental results suggest that the robot achieved similar and, in some cases, human-level dynamic performance with both mappings for the six participating human subjects. All subjects became proficient at teleoperating the robot with both mappings after practice, even though the subjects and the robot differed in size and link length ratio and the teleoperation required the subjects to move unintuitively. Yet, most subjects developed their teleoperation proficiency more quickly with the task space mapping than with the joint space mapping after similar amounts of practice. This study also indicates the potential value of a three-dimensional task space mapping, a teleoperation training simulator, and force feedback to the human pilot for intuitive and dynamic teleoperation of a humanoid robot's arms.
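The contrast between the two mappings can be sketched on a toy planar two-link arm. Everything below (the link lengths, the analytic elbow-down IK, and the arm-length scaling rule) is an illustrative assumption, not the paper's implementation: joint space mapping copies the human joint angles verbatim, while task space mapping scales the human wrist position by the ratio of total arm lengths and solves the robot's IK for the scaled target.

```python
import math

def fk_2link(q1, q2, l1, l2):
    """Forward kinematics of a planar two-link arm."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

def ik_2link(x, y, l1, l2):
    """Analytic elbow-down inverse kinematics for a planar two-link arm."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))  # clamp for numerical safety
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

def joint_space_map(human_q):
    """Joint space mapping: copy the human joint angles verbatim."""
    return human_q

def task_space_map(human_q, human_links, robot_links):
    """Task space mapping: scale the human wrist position by the ratio of
    total arm lengths, then solve the robot's IK for that target."""
    hx, hy = fk_2link(human_q[0], human_q[1], *human_links)
    scale = sum(robot_links) / sum(human_links)
    return ik_2link(scale * hx, scale * hy, *robot_links)
```

When the robot shares the human's link-length ratio, the two mappings coincide; with different ratios they yield different robot postures for the same human pose, which is the crux of the comparison.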
... Under the master-slave teleoperation paradigm, the flow of information is unidirectional from the human to the robot, while under the bilateral teleoperation paradigm there is an exchange of information between the human and the robot, in particular haptic feedback to the human from the robot (Ishiguro et al., 2017; Wang et al., 2015). Teleoperation systems that involve humans in the control loop at the kinematic and dynamic level should have the prime objectives of situational awareness and transparency, i.e., the human operator experiencing the remote environment of the teleoperated robot as holistically as possible while maintaining the stability of the closed-loop system (Hokayem and Spong, 2006; Lichiardopol, 2007). ...
Preprint
As robotic technology advances, the barriers to the coexistence of humans and robots are slowly coming down. Application domains like elderly care, collaborative manufacturing, collaborative manipulation, etc., are considered the need of the hour, and progress in robotics holds the potential to address many societal challenges. The future socio-technical systems constitute of blended workforce with a symbiotic relationship between human and robot partners working collaboratively. This thesis attempts to address some of the research challenges in enabling human-robot collaboration. In particular, the challenge of a holistic perception of a human partner to continuously communicate his intentions and needs in real-time to a robot partner is crucial for the successful realization of a collaborative task. Towards that end, we present a holistic human perception framework for real-time monitoring of whole-body human motion and dynamics. On the other hand, the challenge of leveraging assistance from a human partner will lead to improved human-robot collaboration. In this direction, we attempt at methodically defining what constitutes assistance from a human partner and propose partner-aware robot control strategies to endow robots with the capacity to meaningfully engage in a collaborative task.
... Under the master-slave teleoperation paradigm, the flow of information is unidirectional from the human to the robot, while under the bilateral teleoperation paradigm there is an exchange of information between the human and the robot, in particular haptic feedback to the human from the robot [12], [13]. Teleoperation systems that involve humans in the control loop at the kinematic and dynamic level should have the prime objectives of situational awareness and transparency, i.e., the human operator experiencing the remote environment of the teleoperated robot as holistically as possible, while maintaining the stability of the closed-loop system [14], [1]. ...
... The T2822 has a significant gear reduction (51.24:1), similar to [13,14], which provides the benefit of increased mass-specific torque and reduced Joule heating at the cost of decreased transparency. The EC60 has a very modest (4.33:1) planetary gearbox, a strategy also employed by the MIT Cheetah [5] and Hermes [15] machines. The U10DD represents the DD strategy employed in manipulators [16] and legged machines, including Minitaur [6] and the Penn Jerboa [12], which benefit from excellent bandwidth and transparency at the cost of increased Joule heating. ...
Article
In the field of haptics, conditions for mechanical “transparency” [1] entail such qualities as “solid virtual objects must feel stiff” and “free space must feel free” [2], suggesting that a suitable actuator is able both to do work and readily have work done on it. In this context, seeking actuator transparency has come to mean a preference for minimal dynamics [3] or no impedance [4]. While such general notions seem satisfactory for a haptic interface, actuators with good mechanical transparency are now being used in high-performance robots [5, 6] where once again they must be able to do work, but are now also expected to perceive their environment by processing signals related to contact forces in the leg or manipulator when an explicit force sensor is not present. As robotics researchers develop models [7] suitable for programming behaviors that require systematic making and breaking of contact within the environments on which they perform work, actuators must be capable of: (a) generating the high forces at speed needed to accelerate the body during locomotion [5]; (b) robustness to high forces and impacts during locomotion [8]; (c) perceiving high force events quickly, such as touchdown in stance [9]; (d) perceiving contact quickly without exerting significant force on the object, such as in gentle manipulation [10]; and (e) reacting quickly during time-sensitive behaviors [11]. This work aims to describe a quantitative assay of transparency that might, for example, predict the advantage in proprioceptive tasks of an electromagnetic direct-drive (DD) motor (i.e., one without gearbox), relative to actuation schemes consisting of both a motor and a geared reduction. Specifically, we explore the prospects for characterizing transparency as revealed by comparing the energetic cost of “feeling” the environment. Our sample proprioceptive task is instantiated by a simple torque estimator in Sec. 2.
This scheme is then instrumented in simple contact detection experiments paired with a model to empirically explore the relationships between collision energy and detection time delay in Sec. 3. The actuators are then tested with a feel-cage task to illustrate the advantage of good transparency in Sec. 4.
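The flavor of such a proprioceptive scheme can be sketched as a motor-model residual: subtract what the motor model accounts for from the commanded torque and attribute the remainder to contact. The constants, the finite-difference acceleration, and the simple threshold test are illustrative assumptions, not the paper's estimator.

```python
def external_torque(i_motor, omega, omega_prev, dt, k_t, J, b):
    """Estimate external torque from motor-side signals only:
    tau_ext = J * domega/dt + b * omega - k_t * i  (motor model residual)."""
    alpha = (omega - omega_prev) / dt          # finite-difference acceleration
    return J * alpha + b * omega - k_t * i_motor

def detect_contact(tau_ext, threshold):
    """Flag contact when the residual exceeds a fixed threshold."""
    return abs(tau_ext) > threshold

def reflected_inertia(J_rotor, gear_ratio):
    """A gearbox multiplies rotor inertia by N^2 at the output, which is one
    way reduced 'transparency' shows up in proprioceptive sensing: the same
    collision produces a smaller, later residual relative to model error."""
    return J_rotor * gear_ratio ** 2
```

The N^2 reflected-inertia term is why the direct-drive and modestly geared actuators in the comparison above detect contact with less collision energy than the highly geared one.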
Preprint
Full-text available
Humanoid robot teleoperation allows humans to integrate their cognitive capabilities with the apparatus to perform tasks that need high strength, manoeuvrability and dexterity. This paper presents a framework for teleoperation of humanoid robots using a novel approach for motion retargeting through inverse kinematics over the robot model. The proposed method enhances scalability for retargeting, i.e., it allows teleoperating different robots by different human users with minimal changes to the proposed system. Our framework enables an intuitive and natural interaction between the human operator and the humanoid robot at the configuration space level. We validate our approach by demonstrating whole-body retargeting with multiple robot models. Furthermore, we present experimental validation through teleoperation experiments using two state-of-the-art whole-body controllers for humanoid robots.
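A minimal sketch of the IK step behind such retargeting, on a planar two-link stand-in for a full humanoid model (a damped-least-squares update; the geometry, damping, and gain are assumed for illustration, not taken from the framework):

```python
import math

def fk(q, l1, l2):
    """Planar two-link forward kinematics."""
    x = l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1])
    y = l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])
    return x, y

def dls_step(q, target, l1, l2, lam=0.05, gain=0.5):
    """One damped-least-squares IK update toward a retargeted position:
    dq = (J^T J + lam^2 I)^{-1} J^T (target - fk(q))."""
    x, y = fk(q, l1, l2)
    ex, ey = target[0] - x, target[1] - y
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    J = [[-l1 * s1 - l2 * s12, -l2 * s12],   # dx/dq1, dx/dq2
         [ l1 * c1 + l2 * c12,  l2 * c12]]   # dy/dq1, dy/dq2
    # Solve the damped normal equations (2x2 closed form).
    a11 = J[0][0] ** 2 + J[1][0] ** 2 + lam ** 2
    a12 = J[0][0] * J[0][1] + J[1][0] * J[1][1]
    a22 = J[0][1] ** 2 + J[1][1] ** 2 + lam ** 2
    b1 = J[0][0] * ex + J[1][0] * ey
    b2 = J[0][1] * ex + J[1][1] * ey
    det = a11 * a22 - a12 * a12
    return [q[0] + gain * (a22 * b1 - a12 * b2) / det,
            q[1] + gain * (-a12 * b1 + a11 * b2) / det]
```

Iterating this update at each motion-capture frame drives the model toward the human reference in configuration space; the damping term keeps the step bounded near singular postures, which matters when retargeting across robots with different limb proportions.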
... Doherty, Heintz, & Kvarnström, 2013) to distribute tasks to a potentially heterogeneous team of agents. Further extending the concept of wearable interfaces, Wang et al. (2015) developed an exoskeleton for the whole-body human-in-the-loop teleoperation of a humanoid robot for SAR. In addition to visual feedback, the exoskeleton applies forces on the waist of the operator to display the state of balance of the robot, hence eliciting corrective teleoperated actions. ...
Article
Robotic technologies, whether they are remotely operated vehicles, autonomous agents, assistive devices, or novel control interfaces, offer many promising capabilities for deployment in real‐world environments. Postdisaster scenarios are a particularly relevant target for applying such technologies, due to the challenging conditions faced by rescue workers and the possibility to increase their efficacy while decreasing the risks they face. However, field‐deployable technologies for rescue work have requirements for robustness, speed, versatility, and ease of use that may not be matched by the state of the art in robotics research. This paper aims to survey the current state of the art in ground and aerial robots, marine and amphibious systems, and human–robot control interfaces and assess the readiness of these technologies with respect to the needs of first responders and disaster recovery efforts. We have gathered expert opinions from emergency response stakeholders and researchers who conduct field deployments with them to understand these needs, and we present this assessment as a way to guide future research toward technologies that will make an impact in real‐world disaster response and recovery.
... Using the FlyJacket to control a drone is one form of teleoperation. The cable-driven haptic guidance in the form of kinesthetic feedback studied in this article can be used in other types of teleoperation, such as balancing a bipedal humanoid robot like the Hermes robot [34] or, if placed on the user's arm, kinesthetic feedback during teleoperation of a robotic arm. ...
Article
Robotics teleoperation enables human operators to control the movements of distally located robots. The development of new wearable interfaces as alternatives to hand-held controllers has created new modalities of control, which are more intuitive to use. Nevertheless, such interfaces also require a period of adjustment before operators can carry out their tasks proficiently. In several fields of human-machine interaction, haptic guidance has proven to be an effective training tool for enhancing user performance. This work presents the results of psychophysical and motor learning studies that were carried out with human subjects to assess the effect of cable-driven haptic guidance for a task involving aerial robotic teleoperation. The guidance system was integrated into an exosuit, called the FlyJacket, that was developed to control drones with torso movements. Results for the Just Noticeable Difference (JND) and from the Stevens Power Law suggest that the perception of force on the users' torso scales linearly with the amplitude of the force exerted through the cables and that the perceived force is close to the magnitude of the stimulus. Motor learning studies reveal that this form of haptic guidance improves user performance in training, but this improvement is not retained when subjects are evaluated without guidance.
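The Stevens Power Law analysis can be reproduced in miniature: the law psi = k * phi^a becomes linear in log-log coordinates, so the exponent is recovered by ordinary least squares (the data below are synthetic; an exponent near 1 corresponds to the linear scaling reported above).

```python
import math

def fit_stevens(stimuli, sensations):
    """Fit the Stevens power law  psi = k * phi**a  by ordinary least
    squares on log-transformed data: log psi = log k + a * log phi."""
    xs = [math.log(s) for s in stimuli]
    ys = [math.log(p) for p in sensations]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    k = math.exp(my - a * mx)
    return k, a
```

An exponent a close to 1 means perceived force grows proportionally with cable tension, the property that makes the feedback easy to interpret during flight.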
... For example, the humanoids Atlas [4] and TaeMu [5] from Boston Dynamics utilize hydraulic and hydraulic hybrid servo actuators for their robotic arms with high precision and large payloads. The humanoid HERMES uses specially designed high-torque-density electromagnetic motors for the manipulation of its arms and legs [6], [7]. Most recently, the soft robotics community has paid attention to using pneumatics and artificial muscles to design robotic arms [8]. ...
Conference Paper
Full-text available
A robotic arm is one of the most sophisticated components of a humanoid, due to its complexity in multi-degree-of-freedom actuation and sensing, size and weight constraints, and requirements for object manipulation. This paper presents the design, development, and verification of a low-cost, light-weight robotic manipulator that can achieve anthropomorphic movements. The 5-degree-of-freedom robotic arm has a fully extended length of 31 inches and a weight of 7 pounds. The joints of the arm were fabricated mainly from 3D-printed Polylactic Acid and Nylon parts and linked with carbon fiber tubing. The arm is actuated by 2 servo motors at the distal joint and 3 brushless DC motors at the proximal joints. All joints of the arm perform at zero backlash through harmonic gearboxes, which are also assembled mainly from 3D-printed parts. The robotic arm has demonstrated performance comparable to similar robotic arms on the market at significantly reduced cost.
... This would permit the operator to explore the robot's surroundings as well as to receive visual feedback in delicate manipulation tasks. An approach integrating these components in a whole-body motion imitation framework has already been presented by Wang et al. [143]. Ultimately, a full virtual reality experience could be realized using a head-mounted display, e.g. the Oculus Rift system [82], showing the robot's on-board camera view, in combination with an omnidirectional treadmill for teleoperating navigation actions. ...
Thesis
Mobile manipulators are highly dexterous robotic units, unifying the navigation capabilities of mobile platforms with the manipulation capabilities of classical industrial robotic arms. Thus, they are nowadays expected to be able to operate in versatile domains, cope with challenging environments and to be flexible regarding the tasks assigned to them. Establishing efficient motion planning and control strategies for such systems, on the other hand, is particularly challenging due to their high number of degrees of freedom and the multitude of task and platform related constraints involved. The core capabilities required by a mobile robotic system to successfully complete a mobile manipulation task are to be able to determine where it needs to go in the environment, how to get there without colliding with obstacles and to ensure that its motions are compliant with possible task-related constraints. Moreover, it needs to provide an appropriate interface to permit human operators to specify what it needs to do. In this context, we present in this thesis several novel contributions to the field of motion imitation and generation for mobile robotic systems. We hereby consider different levels of autonomy for the robot, initially relying on a human operator to provide the knowledge required to complete a task successfully towards a robotic service assistant capable of autonomously planning and executing mobile manipulation actions. Moreover, we incorporate motion imitation techniques to compare motion demonstrations of healthy subjects with the ones of patients exhibiting motor control deficits. Motion imitation and generation are both valuable approaches, as each of them offers its own individual advantages. Therefore, preference to the appropriate technique should be given depending on the intended field of application. At the beginning, we introduce an approach that permits humanoid robots to imitate whole-body motions captured from a human operator in real time.
Hereby, the robot is able to perform motions involving extended periods of time in which the robot needs to balance on a single leg. For our investigation on the underlying principles of human motor control behavior, we rely on motion capture data recorded from human demonstrations. More specifically, we quantitatively evaluate and compare the motion control strategies adopted by two groups, i.e., healthy subjects and Parkinson’s disease patients. Additionally, we develop an approach that lets mobile robotic platforms autonomously select an optimal stance pose for preparation or execution of a subsequent mobile manipulation task by adopting the concept of inverse reachability maps. In the following, we present a probabilistic motion planning framework for generating asymptotically optimal paths for mobile manipulation tasks. This framework extends previous planning approaches in the field towards bidirectional search and satisfaction of arbitrary geometric end-effector task constraints. Finally, we present a mobile robotic service assistant framework composed of several interdisciplinary components that permits users with limited communication skills to express their desire using only thoughts. All techniques developed in this thesis were practically implemented and thoroughly evaluated. The overall contribution of the present work is to equip mobile robotic platforms with the ability to imitate complex whole-body motions as well as to generate them autonomously.
... As shown in Fig. 1, in order to measure human motion in real time, the operator interacts with the BFI: a low latency MoCap device (up to 3 kHz sampling) that can apply large feedback forces to the human near the CoM [22], [23]. This HMI allows unconstrained 6-DoF motion of the torso within the workspace while the feedback forces act on the transverse plane. ...
Article
This paper presents a method to achieve human and legged robot dynamic synchronization through bilateral feedback teleoperation. Our study shows how we can explore the interplay between human Extrapolated Center of Mass and the contact forces with the environment in order to transmit to the robot the underlying balancing and stepping strategy. All the necessary key equations for the frontal plane coupled dynamics are presented along with the human feedback law derived from the proposed state normalization in length and time. Here, we pay special attention to how the natural frequency of each system influences the resulting motion and analyze how the coupled system responds to various robot sizes. Experiments in which a human operator controls a simulated bipedal robot show how the Balance Feedback Interface force varies according to different scales and responds to external disturbances. Finally, we show the method’s robustness to uneven terrain and how we can allow the point feet robot to synchronously take steps with the operator. This is an introductory study that aims to grant legged robots motor capabilities for power manipulation comparable to humans.
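In one dimension, the balance-feedback idea behind this line of work (and the HERMES interface it builds on) reduces to a force that grows as the robot's center of pressure approaches the edge of the support polygon. The linear force profile and the parameters below are illustrative assumptions, not the published feedback law:

```python
def balance_feedback_force(cop, support_min, support_max, f_max):
    """Feedback force applied to the operator's torso, growing as the
    robot's centre of pressure (CoP) approaches the nearer edge of the
    support polygon (1-D sagittal sketch): zero at the centre of the
    support region, f_max when the CoP reaches an edge."""
    half = (support_max - support_min) / 2.0
    centre = (support_min + support_max) / 2.0
    margin = max(0.0, half - abs(cop - centre))  # distance to nearest edge
    return f_max * (1.0 - margin / half)
```

The operator feels nothing while the robot is well balanced and an increasing push as it nears tipping, which is what elicits the corrective lean described above.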
Article
Full-text available
Specifying leg placement is a key element of legged robot control; however, current methods for specifying individual leg motions with human-robot interfaces require mental concentration and the use of both arm muscles. In this paper, a new control interface is discussed to specify leg placement for a hexapod robot by using finger motions. Two mapping methods are proposed and tested with lab staff, Joint Angle Mapping (JAM) and Tip Position Mapping (TPM). The TPM method was shown to be more efficient. Then a manually controlled gait based on TPM is compared with a fixed gait and a camera-based autonomous gait in a Webots simulation to test obstacle avoidance performance on 2D terrain. The Number of Contacts (NOC) for each gait is recorded during the tests. The results show that both the camera-based autonomous gait and the TPM are effective methods for adjusting step size to avoid obstacles. In high obstacle density environments, TPM reduces the number of contacts to 25% of that of the fixed gait, which is even better than some of the autonomous gaits with longer step size. This shows that TPM has potential in environments and situations where autonomous footfall planning fails or is unavailable. In future work, this approach can be improved by combining it with haptic feedback, additional degrees of freedom and artificial intelligence.
Article
Motion scaling is an essential technique in robotic surgical systems adopting the leader-follower configuration. By properly reducing the scaling factor, the surgeon can magnify the motion resolution beyond what a human can achieve. However, manually tuning the scaling factor distracts the surgeon during the operation. Hence, adaptive methods were introduced to adjust the scaling factor autonomously, at the cost of increasing the system's complexity and leaving more parameters to be designed. We propose a novel framework enabling a systematic design of the motion scaling auto-tuner to address this problem. First, teleoperation with the leader-follower configuration is modeled as a human-in-the-loop control system. Then, we attain the motion scaling auto-tuner by model-matching based filter design. The proposed method is also integrated with virtual fixture techniques, which improve the safety of surgical tasks via haptic feedback. Finally, experiments are conducted for performance evaluation and comparison. The task completion time and other evaluation metrics are effectively improved with the systematic design framework.
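The leader-follower scaling step itself, plus one naive speed-based auto-tuning rule, can be sketched as follows. The incremental mapping is the standard formulation; the `adaptive_scale` rule is only an illustrative stand-in for the paper's model-matching filter design:

```python
def scale_motion(leader_pos, prev_leader_pos, prev_follower_pos, scale):
    """Incremental leader-follower mapping: the follower moves by the
    leader's displacement times the scaling factor, so a small `scale`
    magnifies the operator's effective motion resolution."""
    return [fp + scale * (lp - plp)
            for fp, lp, plp in zip(prev_follower_pos, leader_pos,
                                   prev_leader_pos)]

def adaptive_scale(leader_speed, s_min=0.2, s_max=1.0, v_ref=0.1):
    """A simple illustrative auto-tuning rule (NOT the paper's design):
    slow, deliberate leader motion gets a small scale for precision,
    fast motion a large scale for reach."""
    return s_min + (s_max - s_min) * min(leader_speed / v_ref, 1.0)
```

Because the mapping is incremental, changing the scale mid-motion never makes the follower jump; it only changes how subsequent displacements are amplified or attenuated.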
Article
Granular media, like sand, restrict vehicle traction and degrade the ability of mobile robots to pull loads. Solutions that are effective on hard or clean surfaces fail on sand. This work presents a mobile robot system that utilizes an active tail to greatly increase its ability to pull loads. The robot combines dynamic tail impacts, plowing, and leveraging of cable tension to achieve this improved performance. The key contributions of this paper are 1) designing tail actuation for dynamic impact, 2) illustrating how dynamic impacts can provide greater anchoring force, 3) combining tail and wheel behaviors for increased anchoring, 4) leveraging geometry to utilize winch tension for enhanced performance, and 5) experimentally demonstrating large payload transport capacity on granular media. The prototype 7.6 kg robot was able to pull a 45.5 kg load on a deep sand surface. This is a substantial improvement over the baseline robot, which was only able to pull a 13.6 kg load.
Article
Purpose This paper aims to present a natural human–robot teleoperation system, which capitalizes on the latest advancements in monocular human pose estimation to simplify scenario requirements for heterogeneous robot arm teleoperation. Design/methodology/approach Several optimizations in the joint extraction process are carried out to better balance the performance of the pose estimation network. To bridge the gap between human joint poses in Cartesian space and heterogeneous robot joint angle poses in radian space, a routinized mapping procedure is proposed. Findings The effectiveness of the developed methods for joint extraction is verified via qualitative and quantitative experiments. The teleoperation experiments on different robots validate the feasibility of controlling the system. Originality/value The proposed system provides an intuitive and efficient human–robot teleoperation method with low-cost devices. It also enhances the controllability and flexibility of robot arms by releasing the human operator from motion constraints, paving a new way for effective robot teleoperation.
Preprint
Full-text available
Drone teleoperation is usually accomplished using remote radio controllers, devices that can be hard to master for inexperienced users. Moreover, the limited amount of information fed back to the user about the robot's state, often limited to vision, can represent a bottleneck for operation in several conditions. In this work, we present a wearable interface for drone teleoperation and its evaluation through a user study. The two main features of the proposed system are a data glove to allow the user to control the drone trajectory by hand motion and a haptic system used to augment their awareness of the environment surrounding the robot. This interface can be employed for the operation of robotic systems in line of sight (LoS) by inexperienced operators and allows them to safely perform tasks common in inspection and search-and-rescue missions such as approaching walls and crossing narrow passages with limited visibility conditions. In addition to the design and implementation of the wearable interface, we performed a systematic study to assess the effectiveness of the system through three user studies (n = 36) to evaluate the users' learning path and their ability to perform tasks with limited visibility. We validated our ideas in both a simulated and a real-world environment. Our results demonstrate that the proposed system can improve teleoperation performance in different cases compared to standard remote controllers, making it a viable alternative to standard Human-Robot Interfaces.
Chapter
In the field of haptics, conditions for mechanical “transparency” [1] entail such qualities as “solid virtual objects must feel stiff” and “free space must feel free” [2], suggesting that a suitable actuator is able both to do work and readily have work done on it.
Article
In this paper, we propose a whole-body remote control framework that enables a robot to imitate human motion efficiently. The framework is divided into kinematic mapping and quadratic programming based whole-body inverse kinematics. In the kinematic mapping, the human motion obtained through a data acquisition device is transformed into a reference motion that is suitable for the robot to follow. To address differences in the kinematic configuration and dynamic properties of the robot and human, quadratic programming is used to calculate the joint angles of the robot considering self-collision, joint limits, and dynamic stability. To address dynamic stability, we use constraints based on the divergent component of motion and zero moment point in the linear inverted pendulum model. Simulation using Choreonoid and a locomotion experiment using the HUBO2+ demonstrate the performance of the proposed framework. The proposed framework has the potential to reduce the preview time or offline task computation time found in previous approaches and hence improve the similarity of human and robot motion while maintaining stability.
Article
Operating a high degree of freedom mobile manipulator, such as a humanoid, in a field scenario requires constant situational awareness, capable perception modules, and effective mechanisms for interactive motion planning and control. A well-designed operator interface presents the operator with enough context to quickly carry out a mission and the flexibility to handle unforeseen operating scenarios robustly. By contrast, an unintuitive user interface can increase the risk of catastrophic operator error by overwhelming the user with unnecessary information. With these principles in mind, we present the philosophy and design decisions behind Director—the open-source user interface developed by Team MIT to pilot the Atlas robot in the DARPA Robotics Challenge (DRC). At the heart of Director is an integrated task execution system that specifies sequences of actions needed to achieve a substantive task, such as drilling a wall or climbing a staircase. These task sequences, developed a priori, make online queries to automated perception and planning algorithms with outputs that can be reviewed by the operator and executed by our whole-body controller. Our use of Director at the DRC resulted in efficient high-level task operation while being fully competitive with approaches focusing on teleoperation by highly-trained operators. We discuss the primary interface elements that comprise the Director and provide analysis of its successful use at the DRC.
Article
As we witnessed in recent major disasters, the functionalities of robots in disaster environments do not appear to meet the public's high level of expectation. This paper reviews robotic operations in disaster situations and open issues with current robotic technologies. We particularly address fundamental problems with teleoperated ground robots for disaster response and recovery: the design of robot platforms and the balance between human supervisory control and robot autonomy. In an attempt to alleviate these problems, this paper suggests enabling technologies to improve the effectiveness of robotic systems for disaster response and recovery missions.
Article
A teleoperating system has long been needed that enables scientists on Earth to have a moon-exploring robot carry out geological explorations on the moon using the same movements they would make on Earth. Studies have been carried out on teleoperating systems for moon-exploring robots that reproduce the operator's movements. However, existing full-contact units for measuring these movements are large, require skill acquisition, and tend to restrict the operator's free movements, making precise movements impossible. To overcome these disadvantages, we have studied a teleoperating system equipped with an almost non-contact movement measuring unit. We verified whether our originally developed teleoperating system is capable of making the robot perform fine hand movement tasks with no need for skill acquisition and no restriction on the operator's movements. The result demonstrated that the proposed teleoperating system is capable of manipulating the robot by means of the operator's movements, reproducing those of geological explorations on the moon.
Article
This paper describes the experimental realization of dynamic walking by a six-legged robot, Little Crabster, on uneven terrain. Dynamic walking is achieved through the processes of walking pattern generation and posture stabilization. A wave gait that sequentially moves the legs with the greatest degree of walking stability is chosen as the walking pattern, and predesigned gait trajectories are generated according to the proposed walking parameters. In addition, the pattern is modified online through the use of ground contact information from the six feet. Posture stabilization consists of CoP (Center of Pressure) control to maintain a dynamic balance against external disturbances, body posture control to maintain the level body, and landing control to adapt to uneven ground with a small landing impact. These controls are addressed in detail. Finally, the performance of the proposed six-legged walking algorithm is experimentally verified through a walking experiment on a large treadmill with a global slope and obstacles on the floor.
Conference Paper
Full-text available
This paper considers the design of state estimators for dynamic balancing systems using a Linear Inverted Pendulum model with unknown modeling errors such as a center of mass measurement offset or an external force. A variety of process and output models are constructed and compared. For a system containing modeling error, it is shown that a naive estimator (one that doesn't account for this error) will result in inaccurate state estimates. These state estimators are evaluated on a force-controlled humanoid robot for a sinusoidal swaying task and a forward push recovery task.
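One way to see why accounting for the modeling error matters: once the CoM measurement offset is added to the state, it becomes observable through the pendulum dynamics rather than biasing the CoM estimate. The sketch below deliberately swaps the paper's recursive estimator for a batch least-squares fit over a measurement window; the Euler-discretized LIP model, symbols, and window length are assumptions for illustration:

```python
import numpy as np

def recover_state_and_offset(zs, omega=3.0, dt=0.01):
    """Recover [x0, xdot0, b] from measurements z_k = x_k + b, where the
    CoM x follows LIP dynamics xddot = omega^2 * x (CoP at the origin)
    and b is a constant, unknown measurement offset."""
    A = np.array([[1.0, dt, 0.0],
                  [omega**2 * dt, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])      # augmented state [x, xdot, b]
    C = np.array([1.0, 0.0, 1.0])        # we measure x + b
    rows, Ak = [], np.eye(3)
    for _ in zs:
        rows.append(C @ Ak)              # z_k = C A^k [x0, xdot0, b]
        Ak = A @ Ak
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(zs), rcond=None)
    return sol
```

A naive estimator that ignores b would instead fold the offset into its CoM estimate, which is the inaccuracy the paper demonstrates.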
Conference Paper
Full-text available
This paper presents a control framework for humanoid robots that uses all joints simultaneously to track motion capture data and maintain balance. The controller comprises two main components: a balance controller and a tracking controller. The balance controller uses a regulator designed for a simplified humanoid model to obtain the desired input to keep balance based on the current state of the robot. The simplified model is chosen so that a regulator can be designed systematically using, for example, optimal control. An example of such a controller is a linear quadratic regulator designed for an inverted pendulum model. The desired inputs are typically the center of pressure and/or torques of some representative joints. The tracking controller then computes the joint torques that minimize the difference from the desired inputs as well as the error from the desired joint accelerations to track the motion capture data, considering the exact full-body dynamics. We demonstrate that the proposed controller effectively reproduces different styles of storytelling motion using dynamics simulation that accounts for hardware limitations.
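A minimal sketch of the kind of simplified-model regulator described above: a discrete-time LQR for a linear inverted pendulum with the CoP as control input, solved by fixed-point iteration of the Riccati recursion. Model, weights, and time step are illustrative, not the paper's values:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Solve the discrete-time Riccati equation by value iteration and
    return the state-feedback gain K for u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

omega, dt = 3.0, 0.01
# Euler-discretized LIP: xddot = omega^2 * (x - p), input p is the CoP.
A = np.array([[1.0, dt], [omega**2 * dt, 1.0]])
B = np.array([[0.0], [-omega**2 * dt]])
K = dlqr(A, B, np.diag([100.0, 1.0]), np.array([[1.0]]))

# The closed loop A - B K should be stable (spectral radius below 1).
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

In the paper's framework, the desired CoP produced by such a regulator is then handed to the tracking controller, which reconciles it with the full-body dynamics.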
Conference Paper
Full-text available
It is known that for a large magnitude push a human or a humanoid robot must take a step to avoid a fall. Despite some scattered results, a principled approach towards "when and where to take a step" has not yet emerged. Towards this goal, we present methods for computing capture points and the capture region, the region on the ground where a humanoid must step to in order to come to a complete stop. The intersection between the capture region and the base of support determines which strategy the robot should adopt to successfully stop in a given situation. Computing the capture region for a humanoid, in general, is very difficult. However, with simple models of walking, computation of the capture region is simplified. We extend the well-known linear inverted pendulum model to include a flywheel body and show how to compute exact solutions of the capture region for this model. Adding rotational inertia enables the humanoid to control its centroidal angular momentum, much like the way human beings do, significantly enlarging the capture region. We present simulations of a simple planar biped that can recover balance after a push by stepping to the capture region and using internal angular momentum. Ongoing work involves applying the solution from the simple model as an approximate solution to more complex simulations of bipedal walking, including a 3D biped with distributed mass.
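For the plain linear inverted pendulum (no flywheel), the instantaneous capture point has the well-known closed form x_cp = x + xdot / omega with omega = sqrt(g / z); the flywheel term in the paper enlarges the capture region beyond this. A sketch of the base case (function names are illustrative):

```python
import math

def capture_point(x, xdot, z_com, g=9.81):
    """Instantaneous capture point of the linear inverted pendulum."""
    omega = math.sqrt(g / z_com)
    return x + xdot / omega

def must_step(x, xdot, z_com, foot_min, foot_max):
    """True if the capture point lies outside the base of support,
    i.e. the robot cannot stop without taking a step."""
    cp = capture_point(x, xdot, z_com)
    return not (foot_min <= cp <= foot_max)
```

The intersection test mirrors the paper's criterion: when the capture region overlaps the base of support, the robot can stop in place; otherwise it must step toward the capture region.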
Conference Paper
Full-text available
This video presents a telepresence system that enables a human operator to explore a remote environment by means of a multimodal man-machine interface, with Rollin' Justin as the teleoperator. The man-machine interface allows for bimanual, dexterous manipulation and, through two different operating modes, wide-area movement as well. A bimanual assembly task, consisting of grasping a connector, opening it, and closing it again, is shown in this video.
Article
Full-text available
This paper is devoted to the permanence of the concept of Zero-Moment Point, widely known by the acronym ZMP. Thirty-five years have elapsed since its implicit presentation (actually before being named ZMP) to the scientific community and thirty-three years since it was explicitly introduced and clearly elaborated, initially in the leading journals published in English. Its first practical demonstration took place in Japan in 1984, at Waseda University, Laboratory of Ichiro Kato, in the first dynamically balanced robot WL-10RD of the robotic family WABOT. The paper gives an in-depth discussion of source results concerning ZMP, paying particular attention to some delicate issues that may lead to confusion if this method is applied in a mechanistic manner to irregular cases of artificial gait, i.e. in the case of loss of dynamic balance of a humanoid robot. After a short survey of the history of the origin of ZMP, a very detailed elaboration of the ZMP notion is given, with a special review concerning "boundary cases" when the ZMP is close to the edge of the support polygon and "fictitious cases" when the ZMP would fall outside the support polygon. In addition, the difference between ZMP and the center of pressure is pointed out. Finally, some unresolved or insufficiently treated phenomena that may yield a significant improvement in robot performance are considered.
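For a point-mass model at constant CoM height, the ZMP reduces to x_zmp = x_com - (z / g) * xddot, and the "fictitious" case discussed above corresponds to this computed point leaving the support polygon while the true CoP saturates at its edge. A hedged sketch of that check; the full multi-body ZMP also involves angular-momentum terms:

```python
def zmp_point_mass(x_com, xddot, z_com, g=9.81):
    """ZMP of a point mass at constant height z_com with horizontal
    CoM acceleration xddot (1-D case)."""
    return x_com - (z_com / g) * xddot

def is_fictitious(x_zmp, support_min, support_max):
    """The 'fictitious ZMP' case: the computed point leaves the support
    interval, signalling loss of dynamic balance; the physical CoP
    cannot follow it past the polygon edge."""
    return not (support_min <= x_zmp <= support_max)
```

This is also the distinction between ZMP and CoP the paper stresses: the two coincide only while the ZMP remains inside the support polygon.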
Article
This paper presents the design principles for highly efficient legged robots, the implementation of the principles in the design of the MIT Cheetah, and the analysis of the high-speed trotting experimental results. The design principles were derived by analyzing three major energy-loss mechanisms in locomotion: heat losses from the actuators, friction losses in transmission, and the interaction losses caused by the interface between the system and the environment. Four design principles that minimize these losses are discussed: employment of high torque-density motors, energy regenerative electronic system, low loss transmission, and a low leg inertia. These principles were implemented in the design of the MIT Cheetah; the major design features are large gap diameter motors, regenerative electric motor drivers, single-stage low gear transmission, dual coaxial motors with composite legs, and the differential actuated spine. The experimental results of fast trotting are presented; the 33-kg robot runs at 22 km/h (6 m/s). The total power consumption from the battery pack was 973 W and resulted in a total cost of transport of 0.5, which rivals running animals' at the same scale. 76% of the total energy consumption is attributed to heat loss from the motor, and the remaining 24% is used in mechanical work, which is dissipated as interaction loss as well as friction losses at the joint and transmission.
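The reported cost of transport follows directly from the dimensionless definition CoT = P / (m g v); plugging in the paper's figures (973 W, 33 kg, 6 m/s) reproduces the stated value of about 0.5:

```python
def cost_of_transport(power_w, mass_kg, speed_ms, g=9.81):
    """Dimensionless cost of transport: power divided by weight
    times speed."""
    return power_w / (mass_kg * g * speed_ms)
```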
Article
This paper deals with a new type of soft gripper which can softly and gently conform to objects of any shape and hold them with uniform pressure. This gripping function is realized by means of a mechanism consisting of multiple links and a series of pulleys, which can be simply actuated by a pair of wires. The capabilities of this gripper are demonstrated with a mechanical model.
Conference Paper
The Marionette system provides an intuitive teleoperation interface that addresses the difficulty of controlling the whole-body motion of a multi-joint robot and of observing its complicated state. The system employs a small robot with a form similar to the control target as a combined operating and displaying device, so that operating it feels like manipulating a doll; hence the name Marionette device. Since the states of the Marionette device and the target robot are synchronized bilaterally, operations on the Marionette device are reflected intuitively in the target, and the robot's motion is simultaneously displayed by the Marionette device. In this paper, we develop a humanoid-type Marionette device in order to operate the humanoid robot HRP-2 and implement a whole-body teleoperation method. Remote walking and manipulation experiments in an unknown environment are introduced, in which the operator must infer the remote environment from the head camera view and control the legs and arms with the Marionette device. We propose a new operation method for controlling the foot position and leg joints with the Marionette device, which makes it possible to walk by designating footsteps one at a time in stable locations.
Conference Paper
This paper presents a method for mapping captured human motion with stepping to a humanoid model, considering the current state and the controller behavior. The mapping algorithm modifies the joint angle, trunk and center of mass (COM) trajectories so that the motion can be tracked and desired contact states can be achieved. The mapping is performed in two steps. The first step modifies the joint angle and trunk trajectories to adapt to the robot kinematics and actual contact foot positions. The second step uses a predicted center of pressure (COP) to determine if the balance controller can successfully maintain the robot's balance, and if not, modifies the COM trajectory. Unlike most humanoid control work that handles motion synthesis and control separately, our COM trajectory modification is performed based on the behavior of the robot controller. We verify the approach in simulation using a captured Tai-chi motion that involves unstructured contact state changes.
Article
Principles of Neural Science
Article
This paper describes mechanisms used by humans to stand on moving platforms, such as a bus or ship, and to combine body orientation and motion information from multiple sensors including vision, vestibular, and proprioception. A simple mechanism, sensory re-weighting, has been proposed to explain how human subjects learn to reduce the effects of inconsistent sensors on balance. Our goal is to replicate this robust balance behavior in bipedal robots. We review results exploring sensory re-weighting in humans and describe implementations of sensory re-weighting in simulation and on a robot.
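Sensory re-weighting can be sketched as inverse-variance fusion whose weights track each channel's recent reliability, so inconsistent sensors are gradually down-weighted; the snippet below is an illustration of the idea, not the implementation from the paper:

```python
def reweighted_estimate(readings, variances, eps=1e-9):
    """Fuse per-sensor tilt estimates (e.g. vision, vestibular,
    proprioception) with weights inversely proportional to each
    sensor's recent error variance.  Returns the fused estimate and
    the normalized weights."""
    weights = [1.0 / (v + eps) for v in variances]
    total = sum(weights)
    fused = sum(w * r for w, r in zip(weights, readings)) / total
    return fused, [w / total for w in weights]
```

Re-running this as the variance estimates are updated online gives the adaptive down-weighting of inconsistent channels that the paper replicates on a robot.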
Conference Paper
We review some general problems underlying sensorimotor transformations for biological control of limb movements. We present evidence that limb kinematics can be controlled independently of kinetics. Ill-defined inverse transformations from endpoint to joint coordinates are solved by means of kinematic constraints, such as a law of planar inter-segmental coordination, or by means of optimization principles. Hybrid feedback/feedforward control schemes are used whenever possible. Finally, internal models mapping motor commands onto their sensory consequences, and vice versa, are used to improve estimates and to learn new tasks.
Article
This paper introduces a framework for whole-body motion generation integrating the operator's control and the robot's autonomous functions during online control of humanoid robots. Humanoid robots are biped machines that usually possess multiple degrees of freedom (DOF). The complexity of their structure and the difficulty in maintaining postural stability make the whole-body control of humanoid robots fundamentally different from that of fixed-base manipulators. Taking hints from conscious and subconscious human motion generation, the authors propose a method of generating whole-body motions that integrates the operator's command input and the robot's autonomous functions. Instead of giving commands to all the joints all the time, the operator selects only the necessary points of the humanoid robot's body for manipulation. This paper first explains the concept of the system and the framework for integrating the operator's commands and autonomous functions in whole-body motion generation. Using the framework, autonomous functions were constructed for maintaining the postural stability constraint while satisfying the desired trajectories of the operation points, including the feet, while interacting with the environment. Finally, this paper reports on the implementation of the proposed method to teleoperate two 30-DOF humanoid robots, HRP-1S and HRP-2, using only two 3-DOF joysticks. Experiments teleoperating the two robots are reported to verify the effectiveness of the proposed method.
Real-time imitation of human whole-body motions by humanoids
  • J Koenemann
  • F Burget
  • M Bennewitz