Fusion
Full Body Surrogacy for Collaborative Communication
MHD Yamen Saraiji
Keio University
Graduate School of Media Design
yamen@kmd.keio.ac.jp
Tomoya Sasaki
Research Center for Advanced
Science and Technology
The University of Tokyo
sasaki@star.rcast.u-tokyo.ac.jp
Reo Matsumura
Research Center for Advanced
Science and Technology
The University of Tokyo
reo@krkrpro.com
Kouta Minamizawa
Keio University
Graduate School of Media Design
kouta@kmd.keio.ac.jp
Masahiko Inami
Research Center for Advanced
Science and Technology
The University of Tokyo
inami@inamilab.info
Figure 1: “Fusion” used as a bodily driven communication system between an operator and a surrogate. Three levels of communication are realized: (A) Direct actions using gestures and indications, (B) Enforced postures by forcing the surrogate’s body into certain positions, and (C) Induced motions by altering the perception of body posture (red arrows represent the induced motion).
ABSTRACT
Effective communication is a key factor in social and professional contexts which involve sharing the skills and actions of more than one person. This research proposes a novel system to enable full body sharing over a remotely operated wearable system, allowing one person to dive into someone else’s body. “Fusion” enables body surrogacy by sharing the same point of view between two persons: a surrogate and an operator, and it extends the limb mobility and actions of the operator using two robotic arms mounted on the surrogate’s body. These arms can be used independently of the surrogate’s arms for collaborative scenarios, or can be linked to the surrogate’s arms for remote assisting and supporting scenarios. Using Fusion, we realize three levels of bodily driven communication: Direct, Enforced, and Induced. We demonstrate through this system the possibilities of truly embodying and transferring our body actions from one person to another, realizing true body communication.
Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for third-party components of this work must be honored.
For all other uses, contact the owner/author(s).
SIGGRAPH ’18 Emerging Technologies, August 12-16, 2018, Vancouver, BC, Canada
©2018 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-5810-1/18/08.
https://doi.org/10.1145/3214907.3214912
CCS CONCEPTS
• Human-centered computing → Collaborative interaction; • Applied computing → Telecommunications;
KEYWORDS
Body Scheme Alternation, Collaborative Systems, Surrogacy
ACM Reference Format:
MHD Yamen Saraiji, Tomoya Sasaki, Reo Matsumura, Kouta Minamizawa,
and Masahiko Inami. 2018. Fusion: Full Body Surrogacy for Collaborative
Communication. In Proceedings of SIGGRAPH ’18 Emerging Technologies.
ACM, Vancouver, CA, 2 pages. https://doi.org/10.1145/3214907.3214912
1 INTRODUCTION
In collaborative and cooperative tasks, effective communication plays an important role in bridging the skills and knowledge between multiple people. A common saying, “being in someone’s shoes,” reflects our need for empathy and for understanding the context from someone else’s point of view, resolving any ambiguities or challenges of communication.
During our daily communication with others, we rely on direct communication, such as verbal and body language, to express internal thoughts and experiences. In scenarios that involve motor skill learning and body postural adjustment (e.g., dancing), a trainer would physically adjust the posture of the trainee by enforcing his body into the correct posture. Also, the trainer might guide the body movement by inducing forces to direct the trainee to follow a certain route or sequence of actions. These three levels of communication, Direct, Enforced, and Induced, are bodily driven, which means they require the active involvement of body actions to communicate our intentions. In remote situations, such collaborative tasks become more challenging due to the lack of means to represent our body actions.
In this research, we present “Fusion”, a novel wearable system that can be used to achieve full body surrogacy, producing effective body-driven communication. Fusion enables two people to share the same point of view, with the capacity to reproduce the body motion of an operator on the surrogate, enabling the operator to effectively communicate and collaborate remotely. Figure 1 shows the three levels of communication achieved using Fusion.
2 RELATED WORK
A body of work has explored sharing the same point of view with other people for the purpose of remote collaboration. [Kasahara and Rekimoto 2015] realized the concept of Jacking In¹ into someone else’s point of view using a head-mounted omnidirectional camera, allowing others to access one’s visual feed and use verbal communication for collaboration. [Lee et al. 2017] uses a similar concept, but adds the ability to share non-verbal cues in communication using Mixed Reality visual feedback. Such systems provide lightweight solutions for direct communication; however, they do not provide body-driven actions towards the remote user. Body action synchronization and matching systems were also proposed to enable muscle control and mapping, such as [Nishida and Suzuki 2016]. Although Electrical Muscle Stimulation (EMS) based solutions are promising for motor skill learning and control, they still lack the ability to produce continuous motion trajectories. Also, such solutions are not suitable for prolonged use due to the fatigue caused to the muscles.
In this work, Fusion, we address the previous limitations while maintaining a high level of portability and accessibility of shared actions for remote collaboration and effective body communication.
3 FUSION OVERVIEW
Fusion, as shown in Figure 2, consists of an operator and a surrogate that are spatially separated. The operator uses an off-the-shelf HMD (Oculus CV1), enabling him to access the surrogate’s body. The surrogate mounts a backpack that consists of a three-axis robotic head with stereo vision and binaural audio, and two anthropomorphic robotic arms (six degrees of freedom) with removable hands, shown in Figure 3. The system is mobile, allowing the surrogate to freely move and walk while wearing the backpack, enabling outdoor applications. Fusion, as shown in Figure 1, enables three levels of communication: (A) Direct actions using humanoid hands, (B) Enforced postures by holding and moving the surrogate’s hands, and (C) Induced motions by moving the surrogate’s hands beyond the physical reach to stimulate a hand-grasping effect.
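To illustrate the kind of retargeting such an architecture implies, the following sketch maps an operator’s HMD orientation onto a three-axis robotic head, clamping each axis to a servo’s mechanical range. This is a minimal sketch under assumed joint limits; the function names, the limit values, and the Euler-angle convention are all hypothetical, not the authors’ published implementation.

```python
# Illustrative HMD-to-robotic-head retargeting sketch.
# Joint limits (degrees) are assumed example values, not Fusion's actual specs.
JOINT_LIMITS = {"yaw": (-90.0, 90.0), "pitch": (-45.0, 45.0), "roll": (-30.0, 30.0)}

def clamp(value, low, high):
    """Keep a commanded angle inside the servo's mechanical range."""
    return max(low, min(high, value))

def retarget_head(hmd_euler_deg):
    """Map HMD Euler angles (yaw, pitch, roll, in degrees) to servo targets."""
    return {axis: clamp(hmd_euler_deg[axis], *JOINT_LIMITS[axis])
            for axis in ("yaw", "pitch", "roll")}

# Example: an operator looking far over their shoulder saturates the yaw axis,
# while pitch and roll pass through unchanged.
targets = retarget_head({"yaw": 120.0, "pitch": 10.0, "roll": 0.0})
```

In a real teleoperation loop this mapping would run at the tracking rate of the HMD, with smoothing and rate limiting added so the physical head never whips against its mechanical stops.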
Figure 2: Fusion system overview: an operator (left) can access a surrogate (right) to control and perceive feedback.
Figure 3: Two types of hands depending on the intended collaboration scenario: (A) humanoid hands for general and independent collaboration, and (B) hands mounted on surrogate wrists for assistive collaboration.
4 EXPERIENCE
At SIGGRAPH ’18, attendees can experience Fusion in either of two roles (two attendees at a time): one as a surrogate, and the other as an operator. The operator will be able to access the surrogate’s field of view and use the robotic arms remotely as his own. The arms are operated in either collaborative mode (independent arms) or assistive mode (arm ends linked to the surrogate’s hands). The surrogate will mount Fusion as a backpack, and work with the operator at a different location on cooperative or assistive tasks. A variety of tools will be provided to interact with. Also, attendees in the audience can directly interact with the surrogate.
ACKNOWLEDGMENTS
This project is supported by JST ERATO JIZAI BODY Project
(JPMJER1701), JST ACCEL Embodied Media Project (JPMJAC1404),
and JSPS KAKENHI Metabody project (JP15H01701), Japan.
REFERENCES
Shunichi Kasahara and Jun Rekimoto. 2015. JackIn Head: immersive visual telepresence system with omnidirectional wearable camera for remote collaboration. In Proceedings of the 21st ACM Symposium on Virtual Reality Software and Technology. ACM, 217–225.
Gun A. Lee, Theophilus Teo, Seungwon Kim, and Mark Billinghurst. 2017. SharedSphere: MR collaboration through shared live panorama. In SIGGRAPH Asia 2017 Emerging Technologies. ACM, 12.
Jun Nishida and Kenji Suzuki. 2016. BioSync: Synchronous kinesthetic experience among people. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, 3742–3745.
¹ Referring to the term used by William Gibson in “Neuromancer”.