Article

Mr Robot: Will androids ever be able to convince people they are human? Guardian Online


Abstract

Guardian News Online (2019) The Multimodal Turing Test for Androids
... In consideration, this study introduces a novel evaluation model for RHRs named the Multimodal Turing Test (MTT): a comprehensive evaluation methodology for assessing an RHR's appearance/materiality, speech, movement and AI against a living human counterpart. This approach provides RHR engineers with a more holistic, comprehensive and graded evaluation procedure than existing evaluation methods in HRI (Mathieson, 2019). Finally, Searle (1980) created the 'Chinese Room' experiment to highlight that there are no visual distinctions between functional logic and meaning. ...
... In consideration, this study aims to lay the foundations of a comprehensive theoretical evaluation methodology named the Multimodal Turing Test (MTT) to determine whether RHRs can attain a level of emulation perceptually indivisible from a human being, as discussed in an article for 'Futurism' (Houser, 2019). As cited in a recent article in the Guardian News online, the MTT is more holistic than the original Turing Test and previous evaluation methods in HRI, evaluating an RHR's appearance (including materiality), speech, kinetics and AI (Mathieson, 2019), as shown in Fig. 4. As stated in the findings of the literature review, equal consideration of the appearance and functionality of RHRs is essential to developing higher modes of human emulation. ...
Thesis
The human face is the most natural interface for face-to-face communication, and the human form is the most effective design for traversing the human-made areas of the planet. Thus, developing realistic humanoid robots (RHRs) with artificial intelligence (AI) permits humans to interact with technology and integrate it into society in a naturalistic manner unmatched by any other form of non-biological human emulation. However, RHRs have yet to attain a level of emulation that is indistinguishable from the human condition and fall into the uncanny valley (UV). The UV represents a dip in human perception, where affinity decreases with heightened levels of human likeness. According to established research into the UV, artificial eyes and mouths are the primary propagators of the uncanny valley effect (UVE) and reduce human likeness and affinity towards RHRs. In consideration, this thesis introduces, tests and comparatively assesses a pair of novel robotic eye prototypes with dilating pupils capable of simultaneously replicating the natural pupillary responses of the human eyes to light and emotion. The robotic pupil systems act as visual signifiers of sentience and emotion to enhance eye-contact interfacing in human-robot interaction (HRI). Secondly, this study presents the design, development and application of a novel robotic mouth system with buccinator actuators and a custom machine learning (ML) mapping from speech synthesis to mouth articulation, forming complex lip shapes (visemes) that emulate human mouth and lip patterns for vowel and consonant sounds. The robotic eye and mouth systems were installed in two RHRs, named 'Euclid' and 'Baudi', and tested for accuracy and processing rate against a living human counterpart.
The results of these experiments indicated that the robotic eyes are capable of dilating within the average pupil range of the human eyes in response to light and emotion, and the robotic mouth operated with an 86.7% accuracy rating when compared against the lip movement of a human mouth during verbal communication. An HRI experiment was conducted using the RHRs and biometric sensors to monitor the physiological responses of test subjects for cross-analysis with a questionnaire. The sample consisted of twenty individuals with experience in AI, robotics and related fields, who examined the authenticity, accuracy and functionality of the prototypes. The robotic mouth prototype achieved 20/20 for aesthetic and lip-synchronisation accuracy compared to a robotic mouth with the buccinator actuators deactivated, heightening the potential application of the system in future RHR design. However, to reduce influential factors, test subjects were not informed of the dilating eye system, and as a result only 2/20 of test subjects noticed the pupil dilation sequences in response to emotive facial expressions (FEs) and light. Moreover, the eye-contact behaviours of the RHRs were more significant than pupil dilation, FEs and eye aesthetics during HRI, counter to previous research on the UVE in HRI. Finally, this study outlines a novel theoretical evaluation framework founded on the 1950 Turing Test (TT) for AI, named the Multimodal Turing Test (MTT), for evaluating human-likeness and interpersonal and intrapersonal intelligence in the design of RHRs and realistic virtual humanoids (RVHs) with embodied artificial intelligence (EAI). The MTT is significant in RHR development as current methods of evaluation, such as the Total Turing Test (TTT), Truly Total Turing Test (TTTT), Robot Turing Test (RTT), Turing Handshake Test (THT), Handshake Turing Test (HTT) and TT, are not nuanced and comprehensive enough to evaluate the functions of an RHR/RVH simultaneously and pinpoint the causes of the UVE.
Furthermore, unlike previous methods, the MTT provides engineers with a developmental framework to assess degrees of human-likeness in RHR and RVH design towards more advanced and accurate modes of human emulation.
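The speech-synthesis-to-mouth-articulation mapping described above rests on a standard technique: translating a phoneme sequence into viseme (lip-shape) commands for the actuators. The thesis's actual mapping tables are not reproduced here; the sketch below is a minimal illustration in which the phoneme symbols (ARPAbet-style) and viseme class names are assumptions for demonstration only.

```python
# Hypothetical sketch: mapping phonemes to visemes for robotic lip articulation.
# The viseme classes and the lookup table are illustrative assumptions,
# not the thesis's actual mapping.

# Reduced phoneme-to-viseme lookup (many-to-one: several phonemes share a lip shape).
PHONEME_TO_VISEME = {
    "P": "bilabial_closed",  "B": "bilabial_closed",  "M": "bilabial_closed",
    "F": "labiodental",      "V": "labiodental",
    "AA": "open_round",      "AO": "open_round",
    "IY": "spread",          "EY": "spread",
    "UW": "rounded",         "OW": "rounded",
}

def phonemes_to_visemes(phonemes):
    """Translate a phoneme sequence into viseme commands, collapsing
    consecutive duplicates so the mouth actuators are not re-triggered
    for an unchanged lip shape."""
    visemes = []
    for p in phonemes:
        v = PHONEME_TO_VISEME.get(p, "neutral")  # fall back to a rest pose
        if not visemes or visemes[-1] != v:
            visemes.append(v)
    return visemes

# "bob" -> B AA B: closed lips, open rounded vowel, closed lips again
print(phonemes_to_visemes(["B", "AA", "B"]))
# -> ['bilabial_closed', 'open_round', 'bilabial_closed']
```

Because the mapping is many-to-one, accuracy comparisons against a human mouth (such as the 86.7% figure above) are naturally scored at the viseme level rather than the phoneme level.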
... The RHRs developed in this study are named Baudi and Euclid, and discussed in press releases [100][101][102]. The RHRs implement Amazon Lex Deep Learning (DL) AI and Amazon Polly speech synthesis (SS) software to converse with people naturally. ...
... RHRs are typically young in appearance and designed without skin imperfections or blemishes [33,100], which is not an accurate representation of the natural tonal variance and ageing of human skin. This aesthetic consideration is significant to RHR design: although Baudi's skin is not flawless, the RHR has no skin wrinkles and fewer skin imperfections than a human face. ...
Article
Realistic humanoid robots (RHRs) with embodied artificial intelligence (EAI) have numerous applications in society, as the human face is the most natural interface for communication and the human body the most effective form for traversing the man-made areas of the planet. Thus, developing RHRs with high degrees of human-likeness provides a lifelike vessel for humans to physically and naturally interact with technology in a manner unmatched by any other form of non-biological human emulation. This study outlines a human-robot interaction (HRI) experiment employing two automated RHRs with contrasting appearances and personalities. The selective sample group employed in this study is composed of 20 individuals, categorised by age and gender for a diverse statistical analysis. Galvanic skin response, facial expression analysis and AI analytics permitted cross-analysis of biometric and AI data with participant testimonies to corroborate the results. This study concludes that younger test subjects preferred HRI with a younger-looking RHR and the more senior age group with an older-looking RHR. Moreover, the female test group preferred HRI with a younger-looking RHR and male subjects with an older-looking RHR. This research is useful for modelling the appearance and personality of RHRs with EAI for specific jobs such as care for the elderly and social companions for the young, isolated and vulnerable.
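The cross-analysis described above, comparing biometric readings across age and gender categories, amounts to grouping participant records by a demographic key and summarising a physiological measure per group. The sketch below illustrates that step with standard-library Python; the field names, age bands and sample values are invented for demonstration, since the study's real data and schema are not public.

```python
# Hypothetical sketch of the cross-analysis step: grouping galvanic skin
# response (GSR) readings by a demographic key and computing per-group means
# for comparison with questionnaire scores. All records are illustrative.
from collections import defaultdict
from statistics import mean

participants = [
    {"age_group": "18-35", "gender": "F", "gsr_mean": 4.2},
    {"age_group": "18-35", "gender": "M", "gsr_mean": 3.9},
    {"age_group": "36+",   "gender": "F", "gsr_mean": 5.1},
    {"age_group": "36+",   "gender": "M", "gsr_mean": 4.8},
]

def group_means(records, key):
    """Average the gsr_mean field per category of `key`
    (e.g. "age_group" or "gender")."""
    buckets = defaultdict(list)
    for r in records:
        buckets[r[key]].append(r["gsr_mean"])
    return {k: round(mean(v), 2) for k, v in buckets.items()}

print(group_means(participants, "age_group"))
# -> {'18-35': 4.05, '36+': 4.95}
```

The same function can be called with `"gender"` as the key, mirroring how the study contrasts male and female responses to the two RHRs.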