Chapter

From Robots to Humanoids: Examining an Ethical View of Social Robotics


Abstract

Technology enables robots to have a voice, interact with humans, and execute a series of commands through artificial intelligence (AI) and human–robot interaction (HRI). The notion that robots can surpass humans implies that they can become conscious through the five human senses, with ethics acting as a sixth sense. This sixth sense of ethics in robots can produce an efficient and versatile humanoid. The key concept that emerges in our chapter is therefore the ethical dilemma of artificial intelligence (ED-AI), and we address the question of whether the sixth sense can play a role in robotic paradigms and support relationship-building behaviors through positive interaction.

Keywords: Artificial intelligence; Consciousness; Human–robot interaction; Robots; Robotic systems; Ethical dilemma of artificial intelligence (ED-AI); Technology governance; Sixth sense




Article
Full-text available
Recently there has been more attention to the cultural aspects of social robots. This paper contributes to this effort by offering a philosophical, in particular Wittgensteinian framework for conceptualizing in what sense and how robots are related to culture and by exploring what it would mean to create an “Ubuntu Robot”. In addition, the paper gestures towards a more culturally diverse and more relational approach to social robotics and emphasizes the role technology can play in addressing the challenges of modernity and in assisting cultural change: it argues that robots can help us to engage in cultural dialogue, reflect on our own culture, and change how we do things. In this way, the paper contributes to the growing literature on cross-cultural approaches to social robotics.
Article
Full-text available
Along with its potential contributions to the practice of care, social assistive robotics raises significant ethical issues. The growing development of this technoscientific field of intelligent robotics has thus triggered a widespread proliferation of ethical attention towards its disruptive potential. However, the current landscape of ethical debate is fragmented and conceptually disordered, endangering ethics' practical strength for normatively addressing these challenges. This paper presents a critical literature review of the ethical issues of social assistive robotics, providing a comprehensive and intelligible overview of the current ethical approach to this technoscientific field. On the one hand, ethical issues are identified, quantitatively analyzed, and categorized into three main thematic groups: Well-being, Care, and Justice. On the other hand, on the basis of some significant tendencies disclosed by the current approach, future lines of research and issues regarding the enrichment of the ethical gaze on social assistive robotics are identified and outlined.
Article
Full-text available
Frontline robots autonomously perform physical service tasks. Because of continuous improvements in their underlying technologies, customers may perceive these robots as social actors with a high level of humanness, both in appearance and behavior. Advancing from mere theoretical contributions to this research field, this article proposes and empirically validates the humanness-value-loyalty model (HVL model). Based on social categorization and social cognition theory, this work empirically analyzes to what extent robots' perceived physical human-likeness, perceived competence, and perceived warmth affect customers' service value expectations and, subsequently, their loyalty intentions. Following two pretests to select the most suitable robots and ensure scenario realism, data were collected by means of a vignette experimental study and analyzed using the partial least squares method. The results reveal that human-likeness positively affects four dimensions of service value expectations. In addition, the perceived competence of the robot influences mainly utilitarian expectations (i.e., functional and monetary value), while perceived warmth influences relational expectations (i.e., emotional value). Interestingly, and contrary to theoretical predictions, the influence of the robot's warmth on service value expectations is more pronounced for customers with a lower need for social interaction. In sum, this research contributes to a better understanding of customers' reactions to artificial intelligence-enabled technologies with humanized cognitive capabilities and suggests interesting research avenues for advancing this emerging field.
Article
Full-text available
This article explains the complex intertwinement between public and private regulators in the case of robot technology. Public policymaking ensures a broad, multi-stakeholder protected scope, but its abstractness often fails in intelligibility and applicability. Private standards, on the contrary, are more concrete and applicable, but most of the time they are voluntary and reflect industry interests. The ‘better regulation’ approach of the EU may increase the use of evidence to inform policy and lawmaking, and the involvement of different stakeholders. Current hard-lawmaking instruments do not appear to take advantage of the knowledge produced by standard-based regulations, virtually wasting their potential benefits. This affects legal certainty with regard to a fast-changing environment like robotics. In this paper, we investigate the challenges of overlapping public/private regulatory initiatives that govern robot technologies in general, and healthcare robot technologies in particular. We ask to what extent robotics should be governed only by standards, and we reflect on how public policymaking could increase its technical understanding of robot technology to devise an applicable and comprehensive framework for this technology. In this respect, we propose different ways to integrate technical know-how into policymaking (e.g., collecting the data and knowledge generated from impact assessments in shared data repositories and using it for evidence-based policies) and to strengthen the legitimacy of standards.
Article
Full-text available
A smart speaker is a wireless device with artificial intelligence that can be activated through voice command. The artificial intelligence interacts in the form of a virtual personal assistant, such as Amazon’s “Alexa”. Companies are currently creating voice applications for smart speakers that allow consumers to use the personal assistant to perform tasks, such as acquiring information and ordering products. This is the dawn of a new type of interaction between brands and consumers, a new touchpoint. Brands need to catch the vision and create content for this new technology, content that is useful in the buying process. The purpose of this study is to determine what types of marketing messages people find acceptable on smart speakers. Based on the findings, a cognitive message strategy is effective with smart speakers. The paper presents three types of executional frameworks that are best suited for the design of a cognitive message: authoritative, testimonial, and slice-of-life. The overriding requirement for marketing on smart speakers is that the message must provide value to the listener.
Article
Full-text available
The idea of machines overcoming humans can be intrinsically related to conscious machines. Surpassing humans would mean replicating, reaching, and exceeding key distinctive properties of human beings, for example, high-level cognition associated with conscious perception. However, can computers be compared with humans? Can computers become conscious? Can computers outstrip human capabilities? These are paradoxical and controversial questions, particularly because there are many hidden assumptions and misconceptions about our understanding of the brain. It is therefore necessary to first explore these assumptions and then suggest how the specific information processing of brains might be replicated by machines. This article will first discuss a subset of human capabilities and their connection with conscious behavior; second, a prototype theory of consciousness will be explored and machines will be classified according to this framework. Finally, this analysis will lead to the paradoxical conclusion that trying to achieve conscious machines in order to beat humans implies that computers will never completely exceed human capabilities, or, if a computer were to do so, that machine should no longer be considered a computer.
Article
Full-text available
The expanding ability of robots to take unsupervised decisions renders it imperative that mechanisms are in place to guarantee the safety of their behaviour. Moreover, intelligent autonomous robots should be more than safe; arguably they should also be explicitly ethical. In this paper, we put forward a method for implementing ethical behaviour in robots inspired by the simulation theory of cognition. In contrast to existing frameworks for robot ethics, our approach does not rely on the verification of logic statements. Rather, it utilises internal simulations which allow the robot to simulate actions and predict their consequences; our method is thus a form of robotic imagery. To demonstrate the proposed architecture, we implement a version of it on a humanoid NAO robot so that it behaves according to Asimov’s laws of robotics. In a series of four experiments, using a second NAO robot as a proxy for the human, we demonstrate that the Ethical Layer enables the robot to prevent the human from coming to harm in simple test scenarios.
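The simulate-then-select loop behind such an "Ethical Layer" can be sketched in miniature. This is a hedged illustration, not the authors' NAO implementation: the action names, the toy `simulate` world model, and the lexicographic harm ordering are all assumptions introduced here to show the shape of the idea.

```python
# Hypothetical sketch of a simulation-based "Ethical Layer": each
# candidate action is run through an internal world model and the
# predicted outcome is scored against Asimov-style priorities.

def simulate(action, world):
    """Toy world model: predict the consequences of one action.

    'world' records whether a human is heading toward a hazard and
    whether intercepting that hazard would damage the robot.
    """
    human_harmed = world["human_near_hazard"] and action != "block_hazard"
    robot_damaged = action == "block_hazard" and world["hazard_harms_robot"]
    goal_reached = action == "pursue_goal" and not human_harmed
    return {"human_harmed": human_harmed,
            "robot_damaged": robot_damaged,
            "goal_reached": goal_reached}

def ethical_layer(actions, world):
    """Choose the action whose simulated outcome best satisfies
    Asimov's laws in strict priority order:
    1. do not allow a human to come to harm,
    2. then avoid damage to the robot,
    3. then pursue the task goal."""
    def score(outcome):
        return (outcome["human_harmed"],      # minimised first
                outcome["robot_damaged"],     # then this
                not outcome["goal_reached"])  # then prefer goal success
    return min(actions, key=lambda a: score(simulate(a, world)))

# A human walks toward a hazard that would also damage the robot:
# the layer sacrifices robot safety and the goal to protect the human.
world = {"human_near_hazard": True, "hazard_harms_robot": True}
print(ethical_layer(["pursue_goal", "idle", "block_hazard"], world))
# → block_hazard
```

The tuple-based `score` makes the priority strict: no amount of goal success can outweigh predicted harm to the human, mirroring the ordering of Asimov's laws.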
Book
Full-text available
This collection examines the implications of technological automation for global prosperity and peace. Focusing on robots, information communication technologies, and other automation technologies, it offers brief interventions that assess how automation may alter the extant political, social, and economic institutions, norms, and practices that comprise the global political economy. In doing so, the collection deals directly with issues such as automated production, trade, war, state-sanctioned robot violence, financial speculation, transnational crime, and policy decision making. This interdisciplinary volume will appeal to students, scholars, and practitioners grappling with the political, economic, and social problems that arise from rapid technological change and its consequences for human prosperity and peace.
Book
This book brings together the fields of artificial intelligence (often known as A.I.) and inclusive education in order to speculate on the future of teaching and learning in increasingly diverse social, cultural, emotional, and linguistic educational contexts. This book addresses a pressing need to understand how future educational practices can promote equity and equality, while at the same time adopting A.I. systems that are oriented towards automation, standardisation and efficiency. The contributions in this edited volume appeal to scholars and students with an interest in forming a critical understanding of the development of A.I. for education, as well as an interest in how the processes of inclusive education might be shaped by future technologies. Grounded in theoretical engagement, establishing key challenges for future practice, and outlining the latest research, this book offers a comprehensive overview of the complex issues arising from the convergence of A.I. technologies and the necessity of developing inclusive teaching and learning. To date, there has been little in the way of direct association between research and practice in these domains: A.I. has been a predominantly technical field of research and development, and while intelligent computer systems and ‘smart’ software are being increasingly applied in many areas of industry, economics, social life, and education itself, a specific engagement with the agenda of inclusion appears lacking. Although such technology offers exciting possibilities for education, including software that is designed to ‘personalise’ learning or adapt to learner behaviours, these developments are accompanied by growing concerns about the in-built biases involved in machine learning techniques driven by ‘big data’.
Article
Today's customers are sophisticated. They demand product variety, functionality, and performance. To survive in this arena, successful companies in a global economy must rapidly introduce new products (new product lines or improvements to existing lines) by collapsing their product development times. Vincent Mabert, John Muth, and Roger Schmenner report results from a comparative case study of six new product introduction projects at six different firms, identifying the elements that are important to product introduction lead time and how they are influenced by customer, organizational, and technical factors. They note that the new product innovation process is very complex, sensitive to external forces like customer demands or expectations and to internal issues like how leadership is defined for the development team. The article describes the participating companies and analyzes the six projects with particular attention to four structural elements: motivation, workings of teams, external vendors' cooperation with the teams, and project control. The authors conclude by identifying the top-priority factors influencing new product introduction time.
Chapter
Technical HCI research seeks to improve the world by expanding the set of things that can be done with computational systems. This chapter considers this work as invention—the creation of new things—contrasted with activities of discovery, which are concerned more with understanding the world. We discuss the values, goals, and criteria for success in this approach. Technical HCI research includes both directly contributing to some human need and indirectly contributing by enabling other technical work with things like toolkits.
Article
The authors contend that qualitative research should be scrutinized for its usefulness in the discovery of substantive theory. They try to present generic elements of the process of generating substantive theory from qualitative data, and consider how the researcher collects and analyzes qualitative data, maximizes the theory's credibility, puts trust in his theory, and conveys the theory to others. Drs. Glaser and Strauss are affiliated with the University of California Medical Center in San Francisco.
Article
There has been a heavy emphasis in new product development (NPD) research on intrateam issues such as communication, trust, and conflict management. Interpersonal cohesiveness, however, has received scant attention. In addition, there are conflicting findings regarding the effects of close-knit teams, which seem to have a beneficial effect up to a point, after which the tight bond becomes a detriment. This paper addresses these issues by introducing an exploratory model of interpersonal cohesiveness→NPD performance that includes antecedents, consequences, and moderating factors. Antecedents of interpersonal cohesiveness include clan culture, formalization, integration, and political dominance of one department, while consequences are groupthink, superordinate identity, and, ultimately, external/internal new product (NP) performance. The relationships among interpersonal cohesiveness, groupthink, and superordinate identity appear to be influenced by two moderating factors: team norms and goal support. Additionally, product type is identified as a moderator on the effects of both groupthink and superordinate identity on external NP performance. The model is built from two sources: a synthesis of the literature in small group dynamics and NPD, and qualitative research conducted across 12 NPD teams. Individual team leaders were interviewed first, followed by interviews with two additional members on each team, for a total of 36 interviews. In keeping with the goals of qualitative research, the interviews and analysis were used to identify and define aspects of interpersonal cohesiveness rather than to test a preconceived model. Representation of different industries and product types was sought intentionally, and variance in NP innovativeness as well as in NP market success/profitability became key criteria in sample selection. 
The exploratory model and propositions developed in this study provide a framework for understanding the role of interpersonal cohesiveness in NPD teams and its direct and indirect effects on NP performance. Although a significant amount of research on cohesiveness has been conducted in previous studies of small groups, the narrow laboratory settings of that research have limited the generalizability of the findings. This study therefore serves as a useful starting point for future theory development involving interpersonal cohesiveness in NPD. It also provides a guide for managers in dealing with team cohesiveness.
Article
Animals sustain the ability to operate after injury by creating qualitatively different compensatory behaviors. Although such robustness would be desirable in engineered systems, most machines fail in the face of unexpected damage. We describe a robot that can recover from such change autonomously, through continuous self-modeling. A four-legged machine uses actuation-sensation relationships to indirectly infer its own structure, and it then uses this self-model to generate forward locomotion. When a leg part is removed, it adapts the self-models, leading to the generation of alternative gaits. This concept may help develop more robust machines and shed light on self-modeling in animals.
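The continuous self-modelling idea above can be illustrated with a deliberately tiny sketch. The one-parameter "leg length", the `predict_tilt` forward model, and the candidate grid are hypothetical stand-ins introduced here; the actual work searched over full body morphologies with stochastic optimization.

```python
# Hypothetical sketch of continuous self-modelling: the robot never
# observes its own structure directly; it infers the self-model that
# best explains recent actuation-sensation pairs, and re-fits after
# damage changes how commands map to sensations.

def predict_tilt(leg_length, command):
    """Toy forward model: body tilt a motor command would produce
    for a robot with the given (unknown) leg length."""
    return command * leg_length

def fit_self_model(observations, candidates):
    """Pick the candidate leg length whose predictions best match
    the observed (command, sensed_tilt) pairs (least squares)."""
    def error(length):
        return sum((predict_tilt(length, command) - tilt) ** 2
                   for command, tilt in observations)
    return min(candidates, key=error)

candidates = [0.5, 1.0, 1.5, 2.0]

# Intact robot (true leg length 1.0): commands and the tilts sensed.
intact = [(0.2, 0.2), (0.5, 0.5), (1.0, 1.0)]
print(fit_self_model(intact, candidates))   # → 1.0

# After a leg part is removed (true length now 0.5), the same
# commands produce different sensations, and the self-model adapts
# without any external signal telling the robot what broke.
damaged = [(0.2, 0.1), (0.5, 0.25), (1.0, 0.5)]
print(fit_self_model(damaged, candidates))  # → 0.5
```

Once the self-model is re-fitted, any planner that consults it (here, anything calling `predict_tilt` with the inferred length) automatically produces behavior suited to the damaged body, which is the mechanism behind the compensatory gaits described above.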
Tailor Brands: Artificial Intelligence-driven branding. Harvard Business School Case 519-017 • J. Avery
Top 9 Ethical Issues in Artificial Intelligence • J. Bossmann
Meet the first-ever robot citizen, a humanoid named Sophia that once said it would ‘destroy humans’ • C. Weller
John McCarthy. Computer History Museum • computerhistory.org
Human–robot affective co-evolution • L. Damiano, P. Dumouchel, H. Lehmann
Sensors and Machine Learning are Giving Robots a Sixth Sense • T. Hornigold
A robotic hand that gives sixth sense back to users • G. Nichols
Researchers develop AI with a sixth sense • L. Harrison
School of Electrical and Electronic Engineering • sciencedirect.com
Is Cybersecurity the sixth sense of AI? • C. Stancombe