Fig 5 - Replicating human motions with the android
Source publication
Article
Full-text available
If we could build an android as a very humanlike robot, how would we humans distinguish a real human from an android? The answer to this question is not so easy. In human-android interaction, we cannot see the internal mechanism of the android, and thus we may simply believe that it is a human. This means that a human can be defined from two perspe...

Context in source publication

Context 1
... soft. After that modification, a plaster full-body mold was made from the modified clay model, and then a silicone full-body model was made from that plaster mold. This silicone model is maintained as a master model. Using this master model, silicone skin for the entire body is made. The thickness of the silicone skin is 5 mm in our current version. The mechanical parts, motors and sensors are covered with polyurethane and the silicone skin produced in this way. As shown in the figure, the details are so finely represented that they cannot be distinguished from those of human beings in photographs. Our current technology for replicating the human figure as an android has reached a fine degree of reality. It is, however, still not perfect.

One issue is the detail of the wetness of the eyes. The eye is the body part to which human observers are most sensitive. When confronted with a human face, a person first looks at the eyes. Although the android has eye-related mechanisms, such as blinking and saccade movements, and its eyeballs are near-perfect copies of human ones, we still become aware of differences from a real human. Making the wet surface of the eye and replicating the outer corners in silicone are difficult tasks, so further improvements are needed for this part. Other issues are the flexibility and robustness of the skin material. The silicone used in the current manufacturing process is sufficient for representing the texture of the skin; however, it loses flexibility after one or two years, and its elasticity is insufficient for adapting to large joint movements.

Very human-like movement is another important factor in developing androids. Even if androids look indistinguishable from humans as static figures, without appropriate movements they can easily be identified as artificial. To achieve highly human-like movement, we found that a child android was too small to embed the required number of actuators, which led to the development of an adult android. The right half of figure 3 shows our latest adult android. This android, named Repliee Q2, contains 42 pneumatic (air) actuators in the upper torso. The positions of the actuators were determined by analyzing real human movements using a precise 3D motion tracker. With these actuators, both unconscious movements, such as breathing in the chest, and conscious large movements, such as head or arm movements, can be generated. Furthermore, the android is able to generate the facial expressions that are important for interacting with humans. Figure 4 shows some of the facial expressions generated by the android. In order to generate smooth, human-like expressions, 13 of the 42 actuators are embedded in the head.

We decided to use pneumatic actuators for the androids instead of the DC motors used in most robots. Pneumatic actuators provide several benefits. First, they are very quiet, producing sound much closer to that of a human. DC servomotors require reduction gears, which generate nonhuman-like, distinctly robot-like noise. Second, the android's reaction to external force becomes very natural thanks to the pneumatic damping. If we used DC servomotors with reduction gears, sophisticated compliance control would be required to obtain the same effect. This is also important for ensuring safety in interactions with the android. On the other hand, the weakness of pneumatic actuators is that they require a large and powerful air compressor. Due to this requirement, the current android cannot walk.
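The "unconscious" breathing movement described above is a good candidate for a minimal code illustration. The sketch below (Python) drives a group of chest actuators with a slow sinusoid; the actuator IDs, the 0..1 command range, and the set_actuator channel are hypothetical stand-ins for illustration, not the actual Repliee Q2 control interface.

```python
import math
import time

# Hypothetical sketch: an "unconscious" breathing motion produced by
# modulating chest actuators with a slow sinusoid. Actuator IDs and the
# set_actuator() command channel are assumptions, not the real robot API.

CHEST_ACTUATORS = [18, 19, 20]   # hypothetical IDs of the chest actuators
BREATH_PERIOD_S = 4.0            # roughly one breath cycle every 4 seconds
BREATH_AMPLITUDE = 0.15          # fraction of the normalized actuator range

def set_actuator(actuator_id: int, position: float) -> None:
    """Placeholder for the air-servo command channel (positions in 0.0..1.0)."""
    print(f"actuator {actuator_id} -> {position:.3f}")

def breathe(duration_s: float = 12.0, rest_position: float = 0.5) -> None:
    """Send a slow sinusoidal offset around the rest posture."""
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        phase = 2 * math.pi * (time.monotonic() - start) / BREATH_PERIOD_S
        offset = BREATH_AMPLITUDE * math.sin(phase)
        for actuator_id in CHEST_ACTUATORS:
            set_actuator(actuator_id, rest_position + offset)
        time.sleep(0.05)         # ~20 Hz command rate

if __name__ == "__main__":
    breathe()
```

In a real controller, an idle motion like this would be blended with the conscious, task-level movements rather than commanded in isolation.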
For wide applications, we need to develop new electric actuators with specifications similar to those of the pneumatic actuators.

The next issue is how to control the 42 air servo actuators to achieve very humanlike movements. The simplest approach is to directly send angular information to each joint. However, as the number of actuators in the android is relatively large, this takes a long time. Another difficulty is that the skin movement does not simply correspond to the joint movement. For example, the android has more than five actuators around the shoulder for generating human-like shoulder movements, with the skin moving and stretching according to the actuator motions. We have already developed methods such as using Perlin noise [10] to generate smooth movements and using a neural network to obtain a mapping between the skin surface and actuator movements. Some issues still remain, such as the limited speed of android movement due to the nature of the pneumatic damping. To achieve quicker and more human-like behavior, speed and torque control will be required in our future work.

After obtaining an efficient method for controlling the android, the next step is the implementation of human-like motions. A straightforward approach to this challenge is to imitate real human motions in synchronization with the android's master. By attaching 3D motion tracker markers to both the android and the master, the android can automatically follow human motions (figure 5). This work is still in progress, but interesting issues have arisen with respect to this kind of imitation. Imitation by the android means representing complicated human shapes and motions in the parameter space of the actuators. Although the android has a relatively large number of actuators compared to other robots, the number is still far smaller than that of a human, so the effect of data-size reduction is significant. By carefully examining this parameter space and mapping, we may find important properties of human body movements. More concretely, we expect to develop a hierarchical representation of human body movements consisting of two or more layers, such as small unconscious movements and large conscious movements. With this hierarchical representation, we can expect to achieve more flexibility in android behavior control.

Androids require human-like perceptual abilities in addition to human-like appearance and movements. This problem has been tackled in the fields of computer vision and pattern recognition under rather controlled environments. However, the problem becomes extremely difficult when applied to robots in real-world situations, since vision and audition become unstable and noisy. Ubiquitous/distributed sensor systems solve this problem. The idea is to recognize the environment and human activities by using many distributed cameras, microphones, infrared motion sensors, floor sensors and ID tag readers in the environment (figure 6). We have developed distributed vision systems [11] and distributed audition systems [12] in our previous work. To solve the present problem, these developments must be integrated and extended. The omnidirectional cameras observe humans from multiple viewpoints and robustly recognize their behaviors [13]. The microphones capture the human voice by forming virtual sound beams. The floor sensors, which cover the entire space, reliably detect the footprints of humans. The only sensors that need to be installed on the robot itself are skin sensors.
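To make the Perlin-noise idea above concrete, here is a self-contained sketch of one-dimensional Perlin (gradient) noise layered onto an actuator target. The idle_command helper and its parameter values are illustrative assumptions, not the authors' implementation.

```python
import math
import random

random.seed(7)  # fixed seed so the noise field is repeatable
_GRADS = [random.uniform(-1.0, 1.0) for _ in range(256)]

def _fade(t: float) -> float:
    # Perlin's smoothing curve: 6t^5 - 15t^4 + 10t^3
    return t * t * t * (t * (t * 6 - 15) + 10)

def perlin1d(x: float) -> float:
    """Classic 1D gradient noise: smooth, band-limited, zero-mean."""
    i = math.floor(x)
    f = x - i
    g0 = _GRADS[i % 256] * f                # contribution of left lattice point
    g1 = _GRADS[(i + 1) % 256] * (f - 1.0)  # contribution of right lattice point
    return g0 + _fade(f) * (g1 - g0)        # smooth interpolation between them

def idle_command(target: float, t: float, channel: int,
                 amplitude: float = 0.03, frequency: float = 0.4) -> float:
    """Target posture plus a small band of smooth noise for lifelike idling."""
    # A per-channel offset decorrelates the noise across the 42 actuators.
    return target + amplitude * perlin1d(frequency * t + 31.7 * channel)
```

Because the noise is smooth and band-limited, the perturbation never jumps discontinuously, which is what makes it usable for subtle, human-looking idle motion.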
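The marker-based imitation of figure 5 also needs a mapping from the master's tracked marker positions to actuator commands. The passage above mentions a neural network for the skin-actuator mapping; as a simpler hedged stand-in, the sketch below fits a linear least-squares map from flattened 3D marker coordinates to the 42 actuator commands. The training data here are random placeholders.

```python
import numpy as np

# X: (n_frames, 3 * n_markers) flattened marker positions from the human master
# Y: (n_frames, n_actuators) actuator commands that reproduced those postures
rng = np.random.default_rng(0)
n_frames, n_marker_coords, n_actuators = 200, 90, 42
X = rng.normal(size=(n_frames, n_marker_coords))         # placeholder data
Y = X @ rng.normal(size=(n_marker_coords, n_actuators))  # placeholder data

# Fit W minimizing ||[X 1] W - Y||; the bias column absorbs posture offsets.
Xb = np.hstack([X, np.ones((n_frames, 1))])
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)

def markers_to_commands(frame: np.ndarray) -> np.ndarray:
    """Map one flattened marker frame (90 values) to 42 actuator commands."""
    return np.append(frame, 1.0) @ W
```

A learned map of this kind makes the data-size reduction discussed above explicit: 42 actuator values have to stand in for a much higher-dimensional human posture.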
Soft and sensitive skin sensors are particularly important for interactive robots. However, there has not been much work in this area in robotics research, and, recognizing its importance, we are now developing original sensors. Our sensors are made by combining silicone skin and piezo films (figure 7). The sensor detects pressure through the bending of the piezo films. Furthermore, when its sensitivity is increased, it can detect the static electricity of a human at very close range; that is, it can perceive the signal that a human being is nearby.

These technologies for very human-like appearance, behavior and perception enable us to develop feasible androids. These androids have undergone various cognitive tests, but this work is still limited. The bottleneck is long-term conversational interaction. Unfortunately, current artificial intelligence (AI) technology for developing human-like brains has only a limited ability, and thus we cannot yet expect human-like conversation with robots. When we meet humanoid robots, we usually expect to have human-like conversations with them. However, the technology lags far behind this expectation. Progress in AI takes time, and this is in fact our final goal in robotics. In order to reach this final goal, we need to use the technologies available today and, moreover, truly understand what a human is.

Our solution to this problem is to integrate android and teleoperation technologies. We have developed Geminoid, a new category of robot, to overcome this bottleneck. We coined "geminoid" from the Latin "geminus," meaning "twin" or "double," and the suffix "-oides," which indicates similarity. As the name suggests, a geminoid is a robot that works as a duplicate of an existing person. It appears and behaves like that person and is connected to the person by a computer network. Geminoids extend the applicable field of android science. Androids are designed for studying human nature in general; with geminoids, we can study such personal aspects as presence or personality traits, tracing their origins and their implementation in robots. Figure 8 shows the robotic part of HI-1, the first geminoid prototype.

Geminoids have the following capabilities. The appearance of a geminoid is based on an existing person and does not depend on the imagination of designers, and its movements can be made or evaluated simply by referring to the original person. The existence of a real person analogous to the robot enables easy comparison studies. Moreover, if a researcher serves as the original, we can expect that individual to offer meaningful insights into the experiments, which is especially important at the very first stage of a new field of study when starting from established research methodologies. Since geminoids are equipped with teleoperation functionality, they are not driven solely by an autonomous program. By introducing manual control, the limitations in current AI technologies can be ...
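Returning to the piezo-film skin sensors described at the start of this excerpt: the sketch below illustrates the touch-versus-presence distinction by thresholding the deviation of a reading from its running baseline. The read_adc interface and the threshold values are assumptions for illustration only.

```python
import statistics
from collections import deque

# Hypothetical sketch: classify piezo-film skin-sensor readings as "touch"
# (pressure bends the film, giving a sharp transient) versus "proximity"
# (static electricity from a nearby person gives a faint signal at high gain).

TOUCH_THRESHOLD = 0.50      # volts; large transient from bending the film
PROXIMITY_THRESHOLD = 0.05  # volts; faint electrostatic signal

def read_adc(channel: int) -> float:
    """Placeholder for the analog front end reading one piezo-film channel."""
    return 0.0

def classify(channel: int, window: deque) -> str:
    sample = read_adc(channel)
    window.append(sample)
    baseline = statistics.mean(window)   # running baseline over recent samples
    deviation = abs(sample - baseline)
    if deviation > TOUCH_THRESHOLD:
        return "touch"
    if deviation > PROXIMITY_THRESHOLD:
        return "proximity"
    return "no contact"

window = deque(maxlen=50)  # keep recent history for the baseline estimate
print(classify(channel=0, window=window))
```

In practice, the proximity channel would need a much higher amplifier gain than the pressure channel, as the excerpt notes.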

Citations

... Human beings are accustomed to understanding external objects with human features and behaviors (H. Ishiguro & Nishio, 2007) and have tendencies to anthropomorphize objects in the environment (Reeves & Nass, 1998). ...
Article
Objectives: This study examined published works related to healthcare robotics for older people using the attributes of health and nursing and the human-computer interaction framework.
Design: An integrative literature review.
Methods: A search strategy captured 55 eligible articles from databases (CINAHL, Embase, IEEE Xplore, and PubMed) and hand-searching approaches. Bibliometric and content analyses grounded in the health and nursing attributes and the human-computer interaction framework were performed using MAXQDA. Finally, results were verified through critical-friend feedback from a second reviewer.
Results: Most articles had multiple authors, were published in non-nursing journals, and originated from developed economies. They primarily focused on applying healthcare robots in practice settings, physical health, and communication tasks. Using the human-computer interaction framework, it was found that older adults frequently served as the primary users, while nurses, healthcare providers, and researchers functioned as secondary users and operators. Research articles focused on the usability, functionality, and acceptability of robotic systems, while theoretical papers explored frameworks, the value of empathy and emotion in robots, human-computer interaction and nursing models and theories supporting healthcare practice, and gerontechnology. Current robotic systems are less anthropomorphic, are operated through real-time direct and supervisory inputs, and are mainly equipped with visual and auditory sensors and actuators with limited capability to perform health assessments.
Conclusion: The results point to the need for technological competency among nurses, advancements in making healthcare robots more humanlike, and conscientious efforts from interdisciplinary research teams to improve robotic system usability and utility for the care of older adults.
... Other studies used confederates as interaction partners and tested live emotional interactions (e.g., Vaughan and Lanzetta, 1980), but this strategy can lack rigorous control of confederates' behaviors (Bavelas and Healing, 2013; Kuhlen and Brennan, 2013). Androids, that is, humanoid robots whose appearance and behavior closely resemble those of humans (Ishiguro and Nishio, 2007), could become an important tool for testing live face-to-face emotional interactions with rigorous control. ...
... It has 35 actuators: 29 for facial muscle actions, 3 for head movement (roll, pitch, and yaw rotation), and 3 for eyeball control (pan movements of the individual eyeballs and tilt movements of both eyeballs). The facial and head movements are driven by pneumatic (air) actuators, which create safe, silent, and human-like motions (Ishiguro and Nishio, 2007; Minato et al., 2007). The pneumatic actuators are controlled by an air pressure control valve. ...
... The construction of effective android software and hardware requires that the mechanisms of psychological theories be elucidated. We expect that this constructivist approach to developing and testing androids (Ishiguro and Nishio, 2007; Minato et al., 2007) will be a useful methodology for understanding the psychological mechanisms underlying human emotional interaction. ...
Article
Full-text available
Android robots capable of emotional interactions with humans have considerable potential for application to research. While several studies developed androids that can exhibit human-like emotional facial expressions, few have empirically validated androids’ facial expressions. To investigate this issue, we developed an android head called Nikola based on human psychology and conducted three studies to test the validity of its facial expressions. In Study 1, Nikola produced single facial actions, which were evaluated in accordance with the Facial Action Coding System. The results showed that 17 action units were appropriately produced. In Study 2, Nikola produced the prototypical facial expressions for six basic emotions (anger, disgust, fear, happiness, sadness, and surprise), and naïve participants labeled photographs of the expressions. The recognition accuracy of all emotions was higher than chance level. In Study 3, Nikola produced dynamic facial expressions for six basic emotions at four different speeds, and naïve participants evaluated the naturalness of the speed of each expression. The effect of speed differed across emotions, as in previous studies of human expressions. These data validate the spatial and temporal patterns of Nikola’s emotional facial expressions, and suggest that it may be useful for future psychological studies and real-life applications.
... The applications of robots are not limited to industrial manufacturing, and the rapid advancement of technology has been accompanied by an increase in the development of social robots, which are designed to socially interact with humans, for instance, by contributing to healthcare for older people (e.g., Paro), teaching autistic children (e.g., NAO), and acting as guides in public places (Broadbent, 2017; Dahl & Boulos, 2013; Gates, 2007; Han et al., 2015; Sabelli & Kanda, 2016). To make the robots around us acceptable and enjoyable and to enhance the quality of human-robot interaction (HRI), robots are built to have a humanlike appearance (e.g., android robots; Broadbent et al., 2013; Ishiguro & Nishio, 2007; Rosenthal-von der Pütten & Krämer, 2014). However, does an increasingly humanlike appearance lead to increased likeability of robots? ...
... Robots with these functions appear to be human-minded and to have the mental capacity to act, plan and exert self-control (i.e., agency) as well as to feel and sense (i.e., experience; Appel et al., 2016; Broadbent, 2017; Gray & Wegner, 2012; Stafford et al., 2014). To promote the acceptability of robots, the human-likeness of a robot's appearance is just one consideration, and engineers have started to consider humanlike mental capacities as another avenue (Fong et al., 2003; Ishiguro & Nishio, 2007). Hence, in recent years, academic interest has started to shift toward a new facet of humanlike robots, namely, the mental capacities of artificial systems (Gray & Wegner, 2012; Stein & Ohler, 2017). ...
Article
Full-text available
Robots are being used to socially interact with humans. To enhance the quality of human-robot interaction, engineers aim to build robots with both a humanlike appearance and high mental capacity, but there is a lack of empirical evidence regarding how these two characteristics jointly affect people’s emotional response to robots. The current two experiments (each N = 80) presented robots with either a mechanical or humanlike appearance, with mental capacities operationalized as low or high, and with either self-oriented mentalization to mainly concentrate on the robot itself or other-oriented mentalization to read others’ minds. It was found that when the robots had a humanlike appearance, they were more dislikeable than when they had a mechanical appearance, replicating the uncanny valley effect for appearance. Importantly, given a humanlike appearance, robots with high mental ability elicited stronger dislike than those with low mental ability, showing an uncanny valley effect for mind, but this difference was absent for robots with a mechanical appearance. In addition, this effect was limited to robots with self-oriented mentalization ability and did not extend to robots with other-oriented mentalization ability. Hence, the exterior appearance and interior mental capacity of robots interact to influence people’s emotional reaction to them, and the uncanny valley as it pertains to the mind depends on the robot’s appearance in addition to its mental ability. This implies that social robots with humanlike appearances should be designed with obvious other-directed social abilities to make them more likeable.
... It is still only a hypothesis, and empirical evidence remains scarce [70,75], so it is unclear whether the hypothesis holds, simply because a humanoid robot that perfectly looks and behaves like a human does not yet exist. Regardless, there have been efforts to overcome this uncanny valley [76], suggesting that appearance does matter (see the Philip K. Dick android [77], Geminoid HI-1 [78] or HRP-4C [79] humanoid robots). ...
... The appearance of humanoid robot heads is considered a major concern by some, and there has been an emergence of android robot heads in humanoid robots [28,71,76-79]. Androids have been defined as humanoid robots with an appearance that closely resembles that of humans, possessing traits such as artificial skin and hair (see [80] for an example), and capable of linguistic and para-linguistic communication [76]. ...
... Most of these expressive android face heads are intended to have the appearance of adults, although there are some examples of child androids like Barthoc Jr. [126] and CB2 [86]. One very interesting feature of the Repliee Q1 and Geminoid HI-1 androids [78] is the implementation of micro-motions. We consider micro-motions to be a type of appearance-enhancing design parameter because they are constantly being displayed (breathing motion or shoulder movements). ...
Article
We conducted a literature review on sensor heads for humanoid robots. A strong case is made on topics involved in human robot interaction. Having found that vision is the most abundant perception system among sensor heads for humanoid robots, we included a review of control techniques for humanoid active vision. We provide historical insight and inform on current robotic head design and applications. Information is chronologically organized whenever possible and exposes trends in control techniques, mechanical design, periodical advances and overall philosophy. We found that there are two main types of humanoid robot heads which we propose to classify as either non-expressive face robot heads or expressive face robot heads. We expose their respective characteristics and provide some ideas on design and vision control considerations for humanoid robot heads involved in human robot interactions.
... [32] In robotics, many different techniques are used to create tactile sensors that mimic and transcend the subtle pressure-sensing properties of natural skins. [33] A variety of sensing technologies are derived from the exploration of different conduction principles and materials, such as capacitive, [34,35] piezoresistive, [36-39] optical, [40,41] piezoelectric, [42,43] magnetic, [44,45] multicomponent, [46,47] and EIT. [27,48,49] In general, different types of multilayer sensors are used to mimic the sensing capabilities of human skins. ...
... Piezoelectric sensing [42,43] relies on stress- (strain-) induced polarization ...
Article
Full-text available
Electrical Impedance Tomography (EIT) is a non-invasive measurement technique that estimates the internal resistivity distribution based on the boundary voltage-current data measured from the surface of the conductor. If a thin, stretchable soft material with a certain piezoresistive property is used as the electrical conductor, EIT is able to reconstruct the positions where the resistivity changes due to pressure contact, so that a large-scale artificial sensitive skin can be provided for robotics. First, the different conduction principles and material types of artificial sensitive skins are discussed, followed by the different driving modes and image reconstruction techniques. Then, details on how EIT is used for robotic skin applications are described. Finally, the development trends and future potential of EIT-based robotic skins are expounded.
... Some of these requirements become relevant depending on the application, and they increase performance complexity. According to Ishiguro and Nishio, [84] HRI researchers have neglected the appearance of robots, prioritizing behavior over appearance. In recent years, there has been a shift toward building more humanlike robots that interact with humans through natural communication. ...
Article
Full-text available
Human activity recognition deals with the integration of sensing and reasoning, aiming to better understand people's actions, and it plays an important role in human interaction, human–robot interaction, and brain–computer interaction. Developing such approaches requires efforts from both signal processing and artificial intelligence. In that sense, this article presents a concise review of signal processing in human activity recognition systems and describes two examples of applications in human activity recognition and robotics: human–robot interaction and socialization, and imitation learning in robotics. In addition, it presents ideas and trends in the context of human activity recognition for human–robot interaction that are important when processing signals within such systems.
... Humans spontaneously attribute mental states and emotional motivations to objects as rudimentary as moving geometric shapes (Heider and Simmel 1944). As sophisticated robots and other machine agents become increasingly interwoven into modern life (Ishiguro and Nishio 2007; Coradeschi et al. 2006), it is imperative that we understand how humans conceptualize the machine partners with which we collaborate. To the extent that people tend to intuitively ascribe human qualities to machines (Knijnenburg and Willemsen 2016), and that threat mobilizes functional shifts in social perceptions on detecting cues of hazard (Holbrook 2016), threat may be expected to bias perceptions of the mental attributes and personhood of robots. ...
Article
Convergent lines of evidence indicate that anthropomorphic robots are represented using neurocognitive mechanisms typically employed in social reasoning about other people. Relatedly, a growing literature documents that contexts of threat can exacerbate coalitional biases in social perceptions. Integrating these research programs, the present studies test whether cues of violent intergroup conflict modulate perceptions of the intelligence, emotional experience, or overall personhood of robots. In Studies 1 and 2, participants evaluated a large, bipedal all-terrain robot; in Study 3, participants evaluated a small, social robot with humanlike facial and vocal characteristics. Across all studies, cues of violent conflict caused significant decreases in perceived robotic personhood, and these shifts were mediated by parallel reductions in emotional connection with the robot (with no significant effects of threat on attributions of intelligence/skill). In addition, in Study 2, participants in the conflict condition estimated the large bipedal robot to be less effective in military combat, and this difference was mediated by the reduction in perceived robotic personhood. These results are discussed as they motivate future investigation into the links among threat, coalitional bias and human–robot interaction.
... The improvement of immersion and telepresence experiences has also been made possible by the rapid progress achieved in robot and sensor technology [22], [23], [24]. In particular, humanoid robots, which mimic the human body's structure, movements and sensory capabilities, offer a more natural platform for remote control, exploration and interaction with humans and the surrounding environment [25], [26], [27]. Some works have shown that robot platforms with a degree of anthropomorphic form and function make users feel a stronger presence in a remote environment, and also provide powerful physical and social features to engage humans in interaction [28], [29]. ...
Article
The idea of being present in a remote location has inspired researchers to develop robotic devices that let humans experience the feeling of telepresence. These devices need multiple channels of sensory feedback to provide a more realistic telepresence experience. In this work, we develop a wearable interface for immersion and telepresence that provides humans with the capability both to receive multisensory feedback from vision, touch and audio and to remotely control a robot platform. Multimodal feedback from a remote environment is based on the integration of sensor technologies coupled to the sensory system of the robot platform. Remote control of the robot is achieved by a modularised architecture, which allows users to visually explore the remote environment. We validated our work with multiple experiments in which participants, located at different venues, were able to successfully control the robot platform while visually exploring, touching and listening to a remote environment. In our experiments we used two different robotic platforms: the iCub humanoid robot and the Pioneer LX mobile robot. These experiments show that our wearable interface is comfortable, easy to use and adaptable to different robotic platforms. Furthermore, we observed that our approach allows humans to experience a vivid feeling of being present in a remote environment.
... Some robots look humanlike; the androids built by Ishiguro & Nishio (2007) are examples. An android is a robot that is highly anthropomorphic in both appearance and behavior. ...
... Android science is an interdisciplinary field in which engineers build humanlike robots and cognitive scientists test these robots with humans to discover more about human nature. Using this feedback loop, Ishiguro's ultimate goal is to build androids that can take part in humanlike conversations (Ishiguro & Nishio 2007). Because that is currently impossible, he is building and studying geminoids. ...
Article
In movies, robots are often extremely humanlike. Although these robots are not yet reality, robots are currently being used in health care, education, and business. Robots provide benefits such as relieving loneliness and enabling communication. Engineers are trying to build robots that look and behave like humans and thus need comprehensive knowledge not only of technology but also of human cognition, emotion, and behavior. This need is driving engineers to study human behavior toward other humans and toward robots, leading to greater understanding of how humans think, feel, and behave in these contexts, including our tendencies for mindless social behaviors, anthropomorphism, uncanny feelings toward robots, and the formation of emotional attachments. However, in considering the increased use of robots, many people have concerns about deception, privacy, job loss, safety, and the loss of human relationships. Human-robot interaction is a fascinating field and one in which psychologists have much to contribute, both to the development of robots and to the study of human behavior.
... In a similar line of research, we previously reported an illusion of body ownership transfer (BOT) for the operators of a very humanlike android robot [22-24]. We first demonstrated that operators experienced an illusion of owning the robot's hand when they moved their own hand and, inside a head-mounted display (HMD), watched images of the robot's hand copying their actions in a perfectly synchronized manner [23]. ...
Article
Full-text available
Body ownership illusions provide evidence that our sense of self is not coherent and can be extended to non-body objects. Studying these illusions gives us practical tools to understand the brain mechanisms that underlie body recognition and the experience of self. We previously introduced an illusion of body ownership transfer (BOT) for operators of a very humanlike robot. This sensation of owning the robot's body was confirmed when operators controlled the robot either by performing the desired motion with their body (motion-control) or by employing a brain-computer interface (BCI) that translated motor imagery commands into robot movement (BCI-control). The interesting observation during BCI-control was that the illusion could be induced even with a noticeable delay in the BCI system. Temporal discrepancy has always shown critical weakening effects on body ownership illusions. However, the delay-robustness of BOT during BCI-control raised a question about the interaction between proprioceptive inputs and delayed visual feedback in agency-driven illusions. In this work, we compared the intensity of the BOT illusion for operators in two conditions: motion-control and BCI-control. Our results revealed a significantly stronger BOT illusion in the case of BCI-control. This finding highlights BCI's potential to induce stronger agency-driven illusions by building a direct communication channel between the brain and the controlled body, thereby removing awareness from the subject's own body.