Figure 1. Demonstration trajectories in a Unity game environment. Top: trajectories recorded via a PS3 controller. Bottom: trajectories recorded with the HTC Vive VR system.

Source publication
Conference Paper
One of the advantages of teaching robots by demonstration is that it can be more intuitive for users to demonstrate rather than describe the desired robot behavior. However, when the human demonstrates the task through an interface, the training data may inadvertently acquire artifacts unique to the interface, not the desired execution of the task....

Context in source publication

Context 1
... is clear in Figure 1 that the trajectories gathered in VR are significantly smoother, with more continuous motion, than those gathered with the PS3 controller. Users favored moving along a single degree of freedom at a time when using the PS3 controller, whereas they felt comfortable moving along multiple degrees of freedom simultaneously with the VR system. This is likely due to the motion of the VR controller being much closer to the natural motion of the hand, combined with the depth perception afforded by the stereoscopic view in the VR headset. Table 1 (Bottom) shows that each demonstration of a task takes considerably less time in VR, which leads to more than twice as many demonstrations in the pick-and-place task and more than three times as many in the cleanup task. In other words, it is possible to collect two to three times as many independent demonstrations in the same amount of time using ...
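The excerpt does not say how smoothness was quantified; a common proxy for trajectory smoothness is mean squared jerk (the third derivative of position). Below is a minimal sketch of that metric, assuming uniformly sampled end-effector positions; the function name, sampling rate, and stand-in data are hypothetical and not from the paper.

```python
import numpy as np

def mean_squared_jerk(positions, dt):
    """Smoothness proxy: mean squared jerk of a recorded trajectory.

    positions: array of shape (T, 3), end-effector positions sampled
    every dt seconds. Lower values indicate smoother, more continuous
    motion.
    """
    # Third-order finite difference approximates the jerk signal.
    jerk = np.diff(positions, n=3, axis=0) / dt**3
    return float(np.mean(np.sum(jerk**2, axis=1)))

# Hypothetical comparison on two recorded demonstrations
# (random walks stand in for captured controller positions).
vr_traj = np.cumsum(np.random.randn(200, 3) * 0.01, axis=0)
ps3_traj = np.cumsum(np.random.randn(200, 3) * 0.05, axis=0)
print(mean_squared_jerk(vr_traj, dt=0.02))
print(mean_squared_jerk(ps3_traj, dt=0.02))
```

Under this metric, a demonstration with fewer abrupt single-axis corrections (as reported for the VR condition) would score lower than one built from axis-by-axis PS3 inputs.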

Citations

... Moreover, for ad-hoc tasks, demonstrating with users' bodies is preferable [17]. Recently, emerging augmented/virtual reality (AR/VR) technologies, e.g., head-mounted AR/VR devices [1,2], have shown strong potential to enable embodied authoring [35]. Further, in HRC tasks, robot partners should adapt to and coordinate with humans' actions. ...
Conference Paper
We present GhostAR, a time-space editor for authoring and acting out Human-Robot-Collaborative (HRC) tasks in situ. Our system adopts an embodied authoring approach in Augmented Reality (AR) for spatially editing actions and programming robots through demonstrative role-playing. We propose a novel HRC workflow that externalizes the user's authoring as a demonstrative and editable AR ghost, allowing for spatially situated visual referencing, realistic animated simulation, and collaborative action guidance. We develop a dynamic time warping (DTW) based collaboration model that takes the real-time captured motion as input, maps it to previously authored human actions, and outputs the corresponding robot actions to achieve adaptive collaboration. We emphasize in-situ authoring and rapid iteration of joint plans without an offline training process. Further, we demonstrate and evaluate the effectiveness of our workflow through HRC use cases and a three-session user study.
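The abstract names dynamic time warping as the mapping from live motion to authored actions but gives no implementation detail. The sketch below shows the classic dynamic-programming form of DTW and a nearest-action lookup in that spirit; the function names, the Euclidean pose distance, and the random stand-in data are assumptions for illustration, not GhostAR's actual code.

```python
import numpy as np

def dtw_cost(seq_a, seq_b):
    """Accumulated DTW alignment cost between two motion sequences.

    seq_a, seq_b: arrays of shape (T, D) -- T timesteps, D pose dimensions.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Local distance between the two poses being aligned.
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

# Hypothetical usage: match live captured motion against a library of
# previously authored actions (random data stands in for 3-D poses).
live = np.random.rand(50, 3)
authored_actions = [np.random.rand(60, 3), np.random.rand(45, 3)]
best = min(range(len(authored_actions)),
           key=lambda k: dtw_cost(live, authored_actions[k]))
print(f"closest authored action: {best}")
```

Because DTW tolerates differences in speed between the live and authored motions, a lookup like this can match a user performing an action faster or slower than it was originally demonstrated.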
Chapter
Human-robot interaction is a critical area of research, providing support for collaborative tasks where a human instructs a robot to interact with and manipulate objects in an environment.