Fig 5
Source publication
If we could build an android as a very humanlike robot, how would we humans distinguish a real human from an android? The answer to this question is not so easy. In human-android interaction, we cannot see the internal mechanism of the android, and thus we may simply believe that it is a human. This means that a human can be defined from two perspectives...
Context in source publication
Context 1
... soft. After that modification, a plaster full-body mold was made from the modified clay model, and a silicone full-body model was then cast from that plaster mold. This silicone model is maintained as a master model, from which silicone skin for the entire body is made. The thickness of the silicone skin is 5 mm in our current version. The mechanical parts, motors and sensors are covered with polyurethane and the produced silicone skin. As shown in the figure, the details are so finely represented that they cannot be distinguished from those of human beings in photographs.

Our current technology for replicating the human figure as an android has reached a fine degree of reality. It is, however, still not perfect. One issue is the detail of the wetness of the eyes. The eye is the body part to which human observers are most sensitive: when confronted with a human face, a person first looks at the eyes. Although the android has eye-related mechanisms, such as blinking and saccade movements, and the eyeballs are near-perfect copies of those of a human, we still become aware of differences from a real human. Making the wet surface of the eye and replicating its outer corners in silicone are difficult tasks, so further improvements are needed for this part. Other issues are the flexibility and robustness of the skin material. The silicone used in the current manufacturing process is sufficient for representing the texture of the skin; however, it loses flexibility after one or two years, and its elasticity is insufficient for adapting to large joint movements.

Very human-like movement is another important factor in developing androids. Even if androids look indistinguishable from humans as static figures, without appropriate movements they can easily be identified as artificial. To achieve highly human-like movement, we found that a child android was too small to embed the required number of actuators, which led to the development of an adult android. The right half of figure 3 shows our latest adult android. This android, named Repliee Q2, contains 42 pneumatic (air) actuators in the upper torso. The positions of the actuators were determined by analyzing real human movements using a precise 3D motion tracker. With these actuators, both unconscious movements, such as breathing in the chest, and conscious large movements, such as head or arm movements, can be generated (a minimal sketch of this layering appears below). Furthermore, the android is able to generate the facial expressions that are important for interacting with humans. Figure 4 shows some of the facial expressions generated by the android. In order to generate smooth, human-like expressions, 13 of the 42 actuators are embedded in the head.

We decided to use pneumatic actuators for the androids instead of the DC motors used in most robots. Pneumatic actuators provide several benefits. First, they are very silent, much closer to the sound a human produces; DC servomotors require reduction gears, which generate nonhuman-like, very robot-like noise. Second, the android's reaction to external force becomes very natural thanks to the pneumatic damping. If we used DC servomotors with reduction gears, sophisticated compliance control would be required to obtain the same effect. This is also important for ensuring safety in interactions with the android. On the other hand, the weakness of pneumatic actuators is that they require a large and powerful air compressor. Due to this requirement, the current android cannot walk.
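The passage above describes generating unconscious movements (chest breathing) and conscious large movements (a head or arm motion) with the same actuator set. As a minimal sketch of that layering, the following Python snippet superimposes a small periodic breathing offset onto a commanded head turn; the joint names, rates, and amplitudes are hypothetical illustrations, not values from the authors' controller.

```python
import numpy as np

# Minimal sketch: layer an unconscious movement (chest breathing) under a
# conscious one (a head turn). All names and constants below are assumed
# for illustration; the android's actual controller is not described here.
FS = 50.0                                   # control loop rate in Hz (assumed)
t = np.arange(0.0, 10.0, 1.0 / FS)          # ten seconds of commands

breathing = 0.02 * np.sin(2 * np.pi * 0.25 * t)    # ~15 breaths/min, small amplitude
head_turn = 0.6 * np.clip((t - 3.0) / 2.0, 0, 1)   # slow ramp to 0.6 rad after t = 3 s

commands = {
    "chest_pitch": breathing,                       # unconscious layer only
    "neck_yaw": head_turn + 0.1 * breathing,        # conscious motion plus faint sway
}
```

Because the two layers are simply summed per joint, the idle breathing never stops while deliberate motions play on top of it, which is one plausible reading of the hierarchical control idea developed later in the text.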
For wider application, we need to develop new electric actuators with specifications similar to those of the pneumatic actuators.

The next issue is how to control the 42 air servo actuators to achieve very humanlike movements. The simplest approach is to directly send angular information to each joint. However, as the number of actuators in the android is relatively large, this takes a long time. Another difficulty is that the skin movement does not simply correspond to the joint movement. For example, the android has more than five actuators around the shoulder for generating human-like shoulder movements, with the skin moving and stretching according to the actuator motions. We have already developed methods such as using Perlin noise [10] to generate smooth movements and using a neural network to obtain a mapping between the skin surface and actuator movements (both ideas are sketched below). Some issues remain, such as the limited speed of android movement due to the nature of the pneumatic damping; to achieve quicker and more human-like behavior, speed and torque control are required in our future study.

After obtaining an efficient method for controlling the android, the next step is the implementation of human-like motions. A straightforward approach to this challenge is to imitate real human motions in synchronization with the android's master. By attaching 3D motion tracker markers to both the android and the master, the android can automatically follow human motions (figure 5). This work is still in progress, but interesting issues have arisen with respect to this kind of imitation. Imitation by the android means representing complicated human shapes and motions in the parameter space of the actuators. Although the android has a relatively large number of actuators compared to other robots, the number is still far smaller than that of a human, so the effect of data-size reduction is significant. By carefully examining this parameter space and mapping, we may find important properties of human body movements. More concretely, we expect to develop a hierarchical representation of human body movements consisting of two or more layers, such as small unconscious movements and large conscious movements. With this hierarchical representation, we can expect to achieve more flexibility in android behavior control.

Androids require human-like perceptual abilities in addition to human-like appearance and movements. This problem has been tackled in the fields of computer vision and pattern recognition under rather controlled environments. However, the problem becomes extremely difficult when applied to robots in real-world situations, since vision and audition become unstable and noisy. Ubiquitous/distributed sensor systems can solve this problem. The idea is to recognize the environment and human activities by using many distributed cameras, microphones, infrared motion sensors, floor sensors and ID tag readers in the environment (figure 6). We have developed distributed vision systems [11] and distributed audition systems [12] in our previous work. To solve the present problem, these developments must be integrated and extended. The omnidirectional cameras observe humans from multiple viewpoints and robustly recognize their behaviors [13]. The microphones capture the human voice by forming virtual sound beams. The floor sensors, which cover the entire space, reliably detect the footprints of humans. The only sensors that need to be installed on the robot itself are skin sensors.
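Returning to the movement-control methods above: the Perlin noise technique [10] can be sketched with classic 1D gradient noise, which produces a smooth, non-repeating signal that can perturb an actuator command around its neutral position so the android never freezes into an unnaturally static pose. The neutral value, amplitude, and time scaling below are assumptions chosen only for illustration.

```python
import numpy as np

def perlin_1d(t, gradients):
    """Classic 1D Perlin (gradient) noise; smooth values roughly in [-1, 1]."""
    i0 = np.floor(t).astype(int) % len(gradients)   # left lattice point
    i1 = (i0 + 1) % len(gradients)                  # right lattice point
    f = t - np.floor(t)                             # position within the cell
    fade = f * f * f * (f * (f * 6 - 15) + 10)      # Perlin's quintic fade curve
    g0 = gradients[i0] * f                          # left gradient contribution
    g1 = gradients[i1] * (f - 1.0)                  # right gradient contribution
    return (1 - fade) * g0 + fade * g1

rng = np.random.default_rng(0)
gradients = rng.uniform(-1.0, 1.0, 256)             # fixed random gradient table

# Hypothetical idle motion for one actuator: drift smoothly around a
# neutral command instead of holding it perfectly still.
t = np.linspace(0.0, 10.0, 1000)                    # seconds
idle_command = 0.5 + 0.05 * perlin_1d(t * 0.8, gradients)
```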
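The marker-to-actuator mapping and the data-size reduction discussed above can likewise be made concrete with a small sketch: compress the recorded marker trajectories into a low-dimensional pose code, then fit a map from that code to the 42 actuator commands. The random arrays below stand in for real recordings, and the linear least-squares fit is a deliberately simple stand-in for the neural network the text mentions.

```python
import numpy as np

# Stand-in data: T frames of M markers (x, y, z flattened) paired with the
# N = 42 actuator commands recorded for the same motion. Real recordings
# from the 3D motion tracker would replace these random arrays.
T, M, N = 5000, 20, 42
rng = np.random.default_rng(1)
markers = rng.normal(size=(T, 3 * M))
actuators = rng.normal(size=(T, N))

# Step 1: PCA on marker space. A handful of components often captures most
# gross body motion, which is the data-size reduction referred to above.
mu = markers.mean(axis=0)
_, _, Vt = np.linalg.svd(markers - mu, full_matrices=False)
k = 8                                       # assumed number of components
pose_code = (markers - mu) @ Vt[:k].T       # (T, k) low-dimensional poses

# Step 2: fit a linear map from the pose code to actuator commands
# (a neural network could replace this least-squares fit).
W, *_ = np.linalg.lstsq(pose_code, actuators, rcond=None)

def markers_to_actuators(frame):
    """Map one marker frame to 42 actuator commands via the learned code."""
    return ((frame - mu) @ Vt[:k].T) @ W
```

Splitting the pose code further, say into slowly varying components for unconscious sway and faster components for deliberate gestures, would be one way to realize the hierarchical, two-layer representation proposed above.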
Soft and sensitive skin sensors are particularly important for interactive robots, yet there has been little prior work on them in robotics. Recognizing their importance, we are now developing original sensors. Our sensors are made by combining silicone skin and piezoelectric films (figure 7). The sensor detects pressure through the bending of the piezoelectric films. Furthermore, by increasing its sensitivity, it can detect a very near human presence from static electricity; that is, it can perceive a signal that a human being is there (a rough processing sketch follows below).

These technologies for very human-like appearance, behavior and perception enable us to develop feasible androids. These androids have undergone various cognitive tests, but this work is still limited. The bottleneck is sustaining long-term conversational interaction. Unfortunately, current artificial intelligence (AI) technology for developing human-like brains has only a limited ability, and thus we cannot expect human-like conversation with robots. When we meet humanoid robots, we usually expect to have human-like conversations with them, but the technology is very far behind this expectation. Progress in AI takes time; it is, in fact, our final goal in robotics. In order to arrive at this final goal, we need to use the technologies available today and, moreover, truly understand what a human is. Our solution to this problem is to integrate android and teleoperation technologies.

We have developed Geminoid, a new category of robot, to overcome this bottleneck. We coined "geminoid" from the Latin "geminus," meaning "twin" or "double," with the suffix "-oides," which indicates similarity. As the name suggests, a geminoid is a robot that works as a duplicate of an existing person. It appears and behaves like that person and is connected to the person by a computer network. Geminoids extend the applicable field of android science. Whereas androids are designed for studying human nature in general, geminoids let us study such personal aspects as presence or personality traits, tracing their origins and their implementation in robots. Figure 8 shows the robotic part of HI-1, the first geminoid prototype.

Geminoids have the following capabilities: The appearance of a geminoid is based on an existing person and does not depend on the imagination of designers. Its movements can be made or evaluated simply by referring to the original person. The existence of a real person analogous to the robot enables easy comparison studies. Moreover, if a researcher serves as the original, we can expect that individual to offer meaningful insights into the experiments, which is especially important in the very first stage of a new field of study, when established research methodologies are still being formed. Since geminoids are equipped with teleoperation functionality, they are not driven only by an autonomous program. By introducing manual control, the limitations in current AI technologies can be ...
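Returning to the skin sensor described at the start of this excerpt: as a rough illustration of how its two sensing modes might be separated in software, the sketch below applies a two-threshold rule to one piezoelectric-film channel. Bending from direct touch yields a large transient, while the static-electricity signature of a nearby person is a much weaker fluctuation. The sampling rate, thresholds, and window handling are assumptions, not the authors' published values.

```python
import numpy as np

FS = 1000.0                  # sampling rate in Hz (assumed)
TOUCH_THRESHOLD = 0.50       # volts; large bending transient (assumed)
PRESENCE_THRESHOLD = 0.05    # volts; weak static-electricity signal (assumed)

def classify_window(window: np.ndarray) -> str:
    """Classify a short piezo-signal window as 'touch', 'presence', or 'idle'."""
    detrended = window - window.mean()      # remove slow drift, keep transients
    peak = np.abs(detrended).max()
    if peak > TOUCH_THRESHOLD:
        return "touch"                      # film bent by direct contact
    if peak > PRESENCE_THRESHOLD:
        return "presence"                   # faint signal from a nearby person
    return "idle"

# Example: a 100 ms window of simulated sensor noise classifies as idle.
print(classify_window(0.01 * np.random.default_rng(2).normal(size=int(0.1 * FS))))
```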
Citations
... One reason for the lack of investigation of robot eyes may be that most humanoid robots do not have human-like eyes. Robots with human-like movements and appearance are called androids [36]. We hypothesized that android eyes similar in appearance to human eyes could induce attention orienting in humans based on mental state attribution. ...
The eyes play a special role in human communication. Previous psychological studies have reported reflexive attention orienting in response to another individual's eyes during live interactions. Although robots are expected to collaborate with humans in various social situations, it remains unclear whether robot eyes have the potential to trigger attention orienting similarly to human eyes, specifically based on mental attribution. We investigated this issue in a series of experiments using a live gaze-cueing paradigm with an android. In Experiment 1, the non-predictive cue was the eyes and head of an android placed in front of human participants. Light-emitting diodes in the periphery served as target signals. The reaction times (RTs) required to localize validly cued targets were faster than those for invalidly cued targets for both types of cues. In Experiment 2, the gaze direction of the android eyes changed before the peripheral target lights appeared, with or without barriers that made the targets non-visible, such that the android did not attend to them. The RTs were faster for validly cued targets only when there were no barriers. In Experiment 3, the targets were changed from lights to sounds, which the android could attend to even in the presence of barriers. The RTs to the target sounds were faster with valid cues, irrespective of the presence of barriers. These results suggest that android eyes may automatically induce attention orienting in humans based on mental state attribution.
... As argued in [49], [50], the degree of human likeness can also affect moral judgements. The so-called android science research program [51], [52], [53] is based on the assumption that robots endowed with a high degree of human likeness may be useful for studying human social cognition and the dynamics of human-robot coordination. This is only a small part of the literature showing the importance of human likeness in the study of human-robot interaction and the design of interactive robots (see also [54] on this topic). ...
It has often been argued that people can attribute mental states to robots without making any ontological commitments to the reality of those states. But what does it mean to 'attribute' a mental state to a robot, and what is an 'ontological commitment'? It will be argued that, on a plausible interpretation of these two notions, it is not clear how mental state attribution can occur without any ontological commitment. Taking inspiration from the philosophical debate on scientific realism, a provisional taxonomy of folk-ontological stances towards robots will also be identified, corresponding to different ways of understanding robotic minds. They include realism, non-realism, eliminativism, reductionism, fictionalism and agnosticism. Instrumentalism will also be discussed and presented as a folk-epistemological stance. In the last part of the article it will be argued that people's folk-ontological stances towards robots and humans can influence their perception of the human-likeness of robots. The analysis carried out here can be seen as encouraging a 'folk-ontological turn' in human-robot interaction research, aimed at explicitly determining what beliefs people have about the reality of robot minds.
... There has also been some research into embedding the electronic parts of the skin. Ishiguro et al. [104] developed a tactile sensor consisting of a silicone skin and piezoelectric film, which can detect pressure by bending the piezoelectric film. It can even detect whether someone is nearby with the help of static electricity (see Figure 7(d)). ...
Humanoid robot heads play an important role in the emotional expression of human-robot interaction (HRI). They are emerging in industrial manufacturing, business reception, entertainment, teaching assistance, and tour guiding. In recent years, significant progress has been made in the field of humanoid robots. Nevertheless, there is still a lack of humanoid robots that can interact with humans naturally and comfortably. This review comprises a comprehensive survey of state-of-the-art technologies for humanoid robot heads over the last three decades, covering mechanical structures, actuators and sensors, anthropomorphic behavior control, emotional expression, and human-robot interaction. Finally, the current challenges and possible future directions are discussed.
... This article analyzes human-android interactions as an emerging aspect of android science within an interdisciplinary research framework that originated in Japan (MacDorman and Ishiguro 2006; Ishiguro and Nishio 2007). Android science, as developed by the professor and roboticist Hiroshi Ishiguro, argues that when we engage with an android robot that possesses human-like and animate characteristics, we react and behave as if it were human. ...
... In the following profoundly significant piece of self-reflection, Prof. Ishiguro describes the interaction between him and an android (built and programmed as his twin in terms of appearance and behavior) (Ishiguro and Nishio 2007): ...
Using anthropological theory, this paper examines human–android interactions (HAI) as an emerging aspect of android science. These interactions are described in terms of adaptive learning (which is largely subconscious). This article is based on the observations reported and supplementary data from two studies that took place in Japan with a teleoperated android robot called Telenoid in the socialization of school children and older adults. We argue that interacting with androids brings about a special context, an interval, and a space/time for reflection and imagination that was not there before. During the interaction something happens: there is adaptive learning, and as a result both the children and the older adults accepted Telenoid, and the children and older adults accepted each other. Using frames of play and ritual, we make sense of and 'capture' moments of adaptive learning and the feedback that elicits a social response from all study participants, resulting in self-efficacy and socialization. Here "ritual" refers to the application of what has been learned, while "play" means that there are no obvious consequences of what has been learned. This analysis illuminates new understanding of the uncanny valley, cultural robotics and the therapeutic potential of HAI. It has implications for the acceptance of androids in 'socialized roles' and gives us insight into the subconscious adaptive learning processes that must take place within humans for androids to be accepted into our society. This approach aims to provide a clearer conceptual basis and vocabulary for further research on android and humanoid development.
... One approach is to reproduce the appearance of the human head as faithfully as possible; the other is to create a likeable artistic representation of a head, a kind of puppet. Examples of the first approach are the Repliee and Geminoid robots created by Hiroshi Ishiguro [33,36], while an example of the second is the Flash robot created at the Wrocław University of Technology [42]. In the latter case, it is essential that the puppet's face be expressive enough to convey a richness of emotions and, moreover, that these emotions be legible to the interlocutor; this is therefore a problem drawn from the art of creating animated characters, so we are dealing here with a meeting point of the technical sciences and art. ...
In order to assess the impact of robots on society, it is necessary to carefully analyze the state of the art, and in particular the fundamental issues that have yet to be resolved but nevertheless have a significant impact on the potential societal changes resulting from the development of robotics. That impact depends on the level of intelligence of robots, so this aspect dominates the presented analysis. The presentation is divided into three parts: 1) analysis of the technical factors affecting the intelligence and security of robots, 2) analysis of the current capabilities of robots, and 3) analysis of diverse predictions of how robotics will evolve, and thus of attitudes towards the influence of this development on society. This part of the paper is devoted to the second of the three issues mentioned above.
... Among the different factors involved in HRI, perhaps the least publicized, but no less important, is the humanoid movement of robots [5]. This movement is perceived unconsciously by humans and is not limited to the robot's arms; it also extends to other movements, such as those of the robot's head, eye blinking, mouth movement, hands, etc. [6,7]. ...
Collaborative robots or cobots interact with humans in a common work environment.
In cobots, one under-investigated but important issue is their movement and how it is perceived by humans. This paper analyzes whether humans prefer a robot moving in a human or in a robotic fashion. To this end, the present work lays out what differentiates the movement performed by an industrial robotic arm from that performed by a human one. The main difference lies in the fact that the robotic movement has a trapezoidal speed profile, while for the human arm the speed profile is bell-shaped and, during complex movements, can be considered a sum of superimposed bell-shaped movements. Based on the lognormality principle, a procedure was developed for a robotic arm to perform human-like movements. Both speed profiles were implemented in two industrial robots, namely an ABB IRB 120 and a Universal Robots UR3. Three tests were used to study the subjects' preference when watching both movements, and another analyzed the same preference when the subjects interacted with the robot by touching its end with their fingers.
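To make the contrast in this abstract concrete, the snippet below generates the two speed profiles it compares: a trapezoidal profile typical of industrial robot moves, and a bell-shaped lognormal profile of a human reach following the lognormality principle. All parameter values are assumptions chosen only to make the shapes visible, not values from the paper.

```python
import numpy as np

t = np.linspace(0.01, 2.0, 500)             # seconds; avoid t = 0 for the log

# Trapezoidal profile: accelerate until t_acc, cruise, decelerate after t_dec.
v_max, t_acc, t_dec, t_end = 1.0, 0.4, 1.6, 2.0
trapezoid = v_max * np.clip(
    np.minimum(t / t_acc, (t_end - t) / (t_end - t_dec)), 0.0, 1.0
)

# Bell-shaped human profile modeled as a lognormal speed curve.
mu, sigma, D = -0.3, 0.35, 1.0              # assumed lognormal parameters
bell = D / (sigma * np.sqrt(2 * np.pi) * t) * np.exp(
    -((np.log(t) - mu) ** 2) / (2 * sigma ** 2)
)
```

A complex human movement could then be approximated as a sum of several such bell-shaped curves shifted in time, as the abstract notes.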
... A more humanoid machine elicits a greater sense of agency (Barlas, 2019; Ciardo et al., 2018), trustworthiness, and a positive attitude (Spatola & Agnieszka, 2021) among users, to a certain extent (Mori et al., 2012). Human beings are accustomed to understanding external objects with human features and behaviors (Ishiguro & Nishio, 2007) and have a tendency to anthropomorphize objects in their environment (Reeves & Nass, 1998). ...
Objectives
This study examined the published works related to healthcare robotics for older people using the attributes of health, nursing, and the human-computer interaction framework.
Design
An integrative literature review.
Methods
A search strategy captured 55 eligible articles from databases (CINAHL, Embase, IEEE Xplore, and PubMed) and hand-searching approaches. Bibliometric and content analyses grounded on the health and nursing attributes and human-computer interaction framework were performed using MAXQDA. Finally, results were verified using critical friend feedback by a second reviewer.
Results
Most articles had multiple authors, were published in non-nursing journals, and originated from developed economies. They primarily focused on applying healthcare robots in practice settings, physical health, and communication tasks. Using the human-computer interaction framework, it was found that older adults frequently served as the primary users, while nurses, healthcare providers, and researchers functioned as secondary users and operators. Research articles focused on the usability, functionality, and acceptability of robotic systems, while theoretical papers explored frameworks, the value of empathy and emotion in robots, human-computer interaction and nursing models and theories supporting healthcare practice, and gerontechnology. Current robotic systems are less anthropomorphic, are operated through real-time direct and supervisory inputs, and are mainly equipped with visual and auditory sensors and actuators, with limited capability for performing health assessments.
Conclusion
Results communicate the need for technological competency among nurses, advancements in increasing healthcare robot humanness, and the importance of conscientious efforts from an interdisciplinary research team in improving robotic system usability and utility for the care of older adults.
... Other studies used confederates as interaction partners and tested live emotional interactions (e.g., Vaughan and Lanzetta, 1980), but this strategy can lack rigorous control of confederates' behaviors (Bavelas and Healing, 2013; Kuhlen and Brennan, 2013). Androids, that is, humanoid robots that exhibit appearances and behaviors closely resembling those of humans (Ishiguro and Nishio, 2007), could become an important tool for testing live face-to-face emotional interactions with rigorous control. ...
... It has 35 actuators: 29 for facial muscle actions, 3 for head movement (roll, pitch, and yaw rotation), and 3 for eyeball control (pan movements of the individual eyeballs and tilt movements of both eyeballs). The facial and head movements are driven by pneumatic (air) actuators, which create safe, silent, and human-like motions (Ishiguro and Nishio, 2007; Minato et al., 2007). The pneumatic actuators are controlled by an air pressure control valve. ...
... The construction of effective android software and hardware requires that the mechanisms of psychological theories be elucidated. We expect that this constructivist approach to developing and testing androids (Ishiguro and Nishio, 2007; Minato et al., 2007) will be a useful methodology for understanding the psychological mechanisms underlying human emotional interaction. ...
Android robots capable of emotional interactions with humans have considerable potential for application to research. While several studies developed androids that can exhibit human-like emotional facial expressions, few have empirically validated androids’ facial expressions. To investigate this issue, we developed an android head called Nikola based on human psychology and conducted three studies to test the validity of its facial expressions. In Study 1, Nikola produced single facial actions, which were evaluated in accordance with the Facial Action Coding System. The results showed that 17 action units were appropriately produced. In Study 2, Nikola produced the prototypical facial expressions for six basic emotions (anger, disgust, fear, happiness, sadness, and surprise), and naïve participants labeled photographs of the expressions. The recognition accuracy of all emotions was higher than chance level. In Study 3, Nikola produced dynamic facial expressions for six basic emotions at four different speeds, and naïve participants evaluated the naturalness of the speed of each expression. The effect of speed differed across emotions, as in previous studies of human expressions. These data validate the spatial and temporal patterns of Nikola’s emotional facial expressions, and suggest that it may be useful for future psychological studies and real-life applications.
... The applications of robots are not limited to industrial manufacturing, and the rapid advancement of technology has been accompanied by an increase in the development of social robots, which are designed to socially interact with humans, for instance, by contributing to healthcare for older people (e.g., Paro), teaching autistic children (e.g., NAO), and acting as guides in public places (Broadbent, 2017; Dahl & Boulos, 2013; Gates, 2007; Han et al., 2015; Sabelli & Kanda, 2016). To make the robots around us acceptable and enjoyable and to enhance the quality of human-robot interaction (HRI), robots are built to have a humanlike appearance (e.g., android robots; Broadbent et al., 2013; Ishiguro & Nishio, 2007; Rosenthal-von der Pütten & Krämer, 2014). However, does an increasingly humanlike appearance lead to increased likeability of robots? ...
... Robots with these functions appear to be human-minded and to have the mental capacity to act, plan and exert self-control (i.e., agency) as well as to feel and sense (i.e., experience; Appel et al., 2016; Broadbent, 2017; Gray & Wegner, 2012; Stafford et al., 2014). To promote the acceptability of robots, the human-likeness of a robot's appearance is just one consideration, and engineers have started to consider humanlike mental capacities as another avenue (Fong et al., 2003; Ishiguro & Nishio, 2007). Hence, in recent years, academic interest has started to shift toward a new facet of humanlike robots, namely, the mental capacities of artificial systems (Gray & Wegner, 2012; Stein & Ohler, 2017). ...
Robots are being used to socially interact with humans. To enhance the quality of human-robot interaction, engineers aim to build robots with both a humanlike appearance and high mental capacity, but there is a lack of empirical evidence regarding how these two characteristics jointly affect people’s emotional response to robots. The current two experiments (each N = 80) presented robots with either a mechanical or humanlike appearance, with mental capacities operationalized as low or high, and with either self-oriented mentalization to mainly concentrate on the robot itself or other-oriented mentalization to read others’ minds. It was found that when the robots had a humanlike appearance, they were more dislikeable than when they had a mechanical appearance, replicating the uncanny valley effect for appearance. Importantly, given a humanlike appearance, robots with high mental ability elicited stronger dislike than those with low mental ability, showing an uncanny valley effect for mind, but this difference was absent for robots with a mechanical appearance. In addition, this effect was limited to robots with self-oriented mentalization ability and did not extend to robots with other-oriented mentalization ability. Hence, the exterior appearance and interior mental capacity of robots interact to influence people’s emotional reaction to them, and the uncanny valley as it pertains to the mind depends on the robot’s appearance in addition to its mental ability. This implies that social robots with humanlike appearances should be designed with obvious other-directed social abilities to make them more likeable.
... It is still only a hypothesis and empirical evidence remains scarce [70,75], so it is unclear whether or not this hypothesis holds, simply because a humanoid robot that perfectly looks and behaves like a human does not yet exist. Regardless, there have been efforts to overcome this uncanny valley [76], suggesting that appearance does matter (see the Philip K. Dick android [77], Geminoid HI-1 [78] or HRP-4C [79] humanoid robots). ...
... The appearance of humanoid robot heads is considered a major concern by some, and android robot heads have emerged in humanoid robots [28,71,76-79]. Androids have been defined as humanoid robots with an appearance that closely resembles that of humans, possessing traits such as artificial skin and hair (see [80] for an example), and capable of linguistic and para-linguistic communication [76]. ...
... Most of these expressive android faces are intended to have the appearance of adults, although there are some examples of child androids, such as Barthoc Jr. [126] and CB2 [86]. One very interesting feature of the Repliee Q1 and Geminoid HI-1 androids [78] is the implementation of micro-motions. We consider micro-motions a type of appearance-enhancing design parameter because they are constantly being displayed (e.g., breathing motion or shoulder movements). ...
We conducted a literature review of sensor heads for humanoid robots, with a strong focus on topics involved in human-robot interaction. Having found that vision is the most abundant perception system among sensor heads for humanoid robots, we included a review of control techniques for humanoid active vision. We provide historical insight and report on current robotic head design and applications. Information is organized chronologically whenever possible and exposes trends in control techniques, mechanical design, periodic advances and overall philosophy. We found that there are two main types of humanoid robot heads, which we propose to classify as either non-expressive face robot heads or expressive face robot heads. We expose their respective characteristics and provide some ideas on design and vision-control considerations for humanoid robot heads involved in human-robot interaction.