Robots in Care and Everyday Life

Abstract

Provides a comparative empirical analysis of human–robot interaction in everyday life. Evaluates the social and ethical issues related to robots in human contexts. Brings together an interdisciplinary group of scholars on the key issue of how robots will shape future life. This book is open access via https://link.springer.com/book/10.1007/978-3-031-11447-2#book-header
Robots in Care and Everyday Life
Future, Ethics, Social Acceptance
SpringerBriefs in Sociology
Uwe Engel
Editor
SpringerBriefs in Sociology
SpringerBriefs in Sociology are concise summaries of cutting-edge research and practical applications across the field of sociology. These compact monographs are refereed by and under the editorial supervision of scholars in Sociology or cognate fields. Volumes are 50 to 125 pages (approximately 20,000–70,000 words), with a clear focus. The series covers a range of content from professional to academic, such as snapshots of hot and/or emerging topics, in-depth case studies, and timely reports of state-of-the-art analytical techniques. The scope of the series spans the entire field of Sociology, with a view to significantly advancing research. The character of the series is international and multi-disciplinary, and it will include research areas such as: health, medical, intervention studies, cross-cultural studies, race/class/gender, children, youth, education, work and organizational issues, relationships, religion, ageing, violence, inequality, critical theory, culture, political sociology, social psychology, and so on. Volumes in the series may analyze past, present and/or future trends, as well as their determinants and consequences. Both solicited and unsolicited manuscripts are considered for publication in this series. SpringerBriefs in Sociology will be of interest to a wide range of individuals, including sociologists, psychologists, economists, philosophers, health researchers, as well as practitioners across the social sciences. Briefs will be published as part of Springer's eBook collection, with millions of users worldwide. In addition, Briefs will be available for individual print and electronic purchase. Briefs are characterized by fast, global electronic dissemination, standard publishing contracts, easy-to-use manuscript preparation and formatting guidelines, and expedited production schedules. We aim for publication 8–12 weeks after acceptance.
Uwe Engel
Editor
Robots in Care and Everyday Life
Future, Ethics, Social Acceptance
Editor
Uwe Engel
Department of Social Sciences
University of Bremen
Bremen, Germany
ISSN 2212-6368 ISSN 2212-6376 (electronic)
SpringerBriefs in Sociology
ISBN 978-3-031-11446-5 ISBN 978-3-031-11447-2 (eBook)
https://doi.org/10.1007/978-3-031-11447-2
© The Author(s) 2023. This book is an open access publication.
Open Access This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Artificial intelligence represents a key technology that is already changing the world today, with the expectation of changing the world in much more fundamental ways in the future. The widespread reluctance of sociology to deal with this challenge is more than astonishing. We still observe a lack of methodologically trustworthy data from social research. For example, the European Social Survey, the flagship of European social research, has not provided any such data to date; Eurobarometer studies do occasionally provide at least some smaller question modules. That is not much.
Thus, we wanted to contribute to closing this research gap by providing thematically more extensive and differentiated survey data, even if this were only possible in a local sample of the Free Hanseatic City of Bremen. But we also wanted to help close an additional research gap. The key questions were: In what way will AI change society, and how will interaction with robots change people's everyday life? Although we cannot provide precise forecasts, we can show which developments experts do expect from today's perspective. For this, we used the Delphi method, asking a larger selection of experts from different disciplines for their scientific assessments.
A sociological investigation at the intersection of AI and society certainly runs the risk of one-sided alarmism; nor would such alarmism be entirely unpopular. However, to avoid any one-sidedness from the outset, we paid much attention to professional heterogeneity, both in the constituency of experts whom we asked for their opinions and in the project group itself. The latter group is affiliated with two major institutions at the Bremen science location: the Robotics Innovation Center of the Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI) and diverse chairs of the University of Bremen. As the context of each chapter details, these institutions involve the Robotics Chair and EASE, the Bremen Spatial Cognition Center, the Civil Law Chair, and the Social Science Methods Centre. The scientific backgrounds of the project members represent robotics, cognitive science, jurisprudence, and social science.
The idea for the "Bremen AI Delphi" project was born in the context of the Digital Traces Workshop, which took place on November 8–10, 2018, at the University of Bremen. The Social Science Methods Centre organized the three-day workshop, and the German Research Foundation (DFG), the federal state of Bremen, and the Bremen International Graduate School of Social Sciences funded it. During the workshop, an interdisciplinary group of scholars shared recent advancements in computational social science and established new research collaborations. Questionnaire construction and fielding were realized in 2019. A first major report to the public took place on a project-related "theme day" at Radio Bremen on January 14, 2020, four weeks after the end of the field phase. With this volume, we present the project's major findings for scientific discussion.
The generous financial support of the State and University Library Bremen (SuUB) enables free access to this book. We are extremely grateful to SuUB for this support.
Bremen, Germany
January 27, 2022

Uwe Engel
Contents
1 Trustworthiness and Well-Being: The Ethical, Legal, and Social Challenge of Robotic Assistance .................... 1
Michael Beetz, Uwe Engel, Nina Hoyer, Lorenz Kähler, Hagen Langer, Holger Schultheis, and Sirko Straube
2 Artificial Intelligence and the Labor Market: Expected Development and Ethical Concerns in the German and European Context ................................... 27
Uwe Engel and Lena Dahlhaus
3 The Bremen AI Delphi Study .............................. 49
Uwe Engel and Lena Dahlhaus
4 The Challenge of Autonomy: What We Can Learn from Research on Robots Designed for Harsh Environments .......... 57
Sirko Straube, Nina Hoyer, Niels Will, and Frank Kirchner
5 The Legal Challenge of Robotic Assistance .................... 81
Lorenz Kähler and Jörn Linderkamp
6 Cognition-Enabled Robots Assist in Care and Everyday Life: Perspectives, Challenges, and Current Views and Insights ........ 103
Michael Beetz, Uwe Engel, and Hagen Langer
7 Ethical Challenges of Assistive Robotics in the Elderly Care: Review and Reflection ............................... 121
Mona Abdel-Keream
Contributors
Mona Abdel-Keream University of Bremen, Bremen, Germany
Michael Beetz University of Bremen, Bremen, Germany
Lena Dahlhaus University of Oldenburg, Oldenburg, Germany
Uwe Engel University of Bremen, Bremen, Germany
Nina Hoyer Robotics Research Group, University of Bremen, Bremen, Germany; Robotics Innovation Center, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI), Bremen, Germany
Lorenz Kähler University of Bremen, Bremen, Germany
Frank Kirchner Robotics Innovation Center, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI), Bremen, Germany; Robotics Research Group, University of Bremen, Bremen, Germany
Hagen Langer University of Bremen, Bremen, Germany
Jörn Linderkamp University of Bremen, Bremen, Germany
Holger Schultheis University of Bremen, Bremen, Germany
Sirko Straube Robotics Innovation Center, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI), Bremen, Germany
Niels Will Robotics Innovation Center, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI), Bremen, Germany
Chapter 1
Trustworthiness and Well-Being: The Ethical, Legal, and Social Challenge of Robotic Assistance
Michael Beetz, Uwe Engel, Nina Hoyer, Lorenz Kähler, Hagen Langer, Holger Schultheis, and Sirko Straube
Abstract If a technology lacks social acceptance, it cannot achieve dissemination into society. The chapter thus illuminates the ethical, legal, and social implications of robotic assistance in care and daily life. It outlines a conceptual framework and identifies patterns of trust in human–robot interaction. The analysis relates trust in robotic assistance and its anticipated use to open-mindedness toward technical innovation and reports evidence that this self-image unfolds its psychological impact on accepting robotic assistance through the imagined well-being that scenarios of future human–robot interaction evoke in people today. All findings come from the population survey of the Bremen AI Delphi study.

Keywords Artificial intelligence · AI · Robots · Robotic assistance · Trust · Trustworthiness · Social acceptance · Ethics · Human–robot interaction · Well-being · Care · Everyday life
M. Beetz · U. Engel (*) · L. Kähler · H. Langer · H. Schultheis
University of Bremen, Bremen, Germany
e-mail: uengel@uni-bremen.de
N. Hoyer
University of Bremen, Bremen, Germany
Robotics Innovation Center, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI), Bremen, Germany
S. Straube
Robotics Innovation Center, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI), Bremen, Germany

1.1 Introduction

That artificial intelligence and robots will change life is widely expected. International competition alone will ensure continuing investments in this key technology. No country will be able to maintain its economic competitiveness if it does not invest in research and the development of such a key technology. However, this premise complicates things if AI applications do not meet with the necessary acceptance in a country's society, including acceptance by social interest groups and, thus, acceptance in the population. It is difficult to imagine populations in democratically constituted, liberal societies making extensive use of technologies that they do not want to use.
This raises the question of AI's social and ethical acceptance. How should the development of this technology advance to gain and secure this acceptance? The key lies in the perceived trustworthiness of the technology and, consequently, the reasons that lead people and interest groups to attest to this property of AI and its applications. For instance, as the Royal Society (2017) puts it, using the example of machine learning: "Continued public confidence in the systems that deploy machine learning will be central to its ongoing success, and therefore to realizing the benefits that it promises across sectors and applications" (p. 84).
Trustworthiness
The trustworthiness of AI depends upon its consistency with normative (political and ethical) beliefs that appear suitable and with their underlying interests. Ethical guidelines, such as those that the EU Commission has published, represent this approach to trustworthiness very well (European Commission Independent High-Level Expert Group on Artificial Intelligence, 2019). For instance, AI systems should support human autonomy and decision-making, be technically robust, take a preventive approach to risks, ensure prevention of harm to privacy, and be transparent. Also, they should ensure diversity, non-discrimination, fairness, and accountability. These guidelines went into the "ecosystem of trust," a regulatory framework for AI laid down in the European Commission's White Paper on Artificial Intelligence, in which "lack of trust" is "a main factor holding back a broader uptake of AI" (European Commission, 2020, p. 9). Consequently, "a human-centric approach to the development and use of AI technologies, the protection of EU values and fundamental rights such as non-discrimination, privacy and data protection, and the sustainable and efficient use of resources are among the key principles that guide the European approach" (European Commission, 2021, p. 31).
In a broader sense, such an approach to trustworthiness applies to any interest groups in politics, the economy, and society that express normative beliefs in line with their interests. However, the relevant views are not only those of interest groups but also those among the population of a country, where normative beliefs determine whether a technology like AI appears trustworthy. Ideas of fairness, justice, and transparency are no less relevant for the people than for interest groups. Then, it is less about the technology itself than about the interests that lie behind its applications and their integrity. An important use case is in the labor market, for the (pre)selection of job seekers, described in more detail below.
However, relevant drivers of perceived trustworthiness include not only normative beliefs but also attitudes, expectations, psychological needs, and the hopes and fears relating to AI and robots, in a situation where people lack personal experience with a technology that is still very much in development. In such a situation, trust depends heavily on whether people are prepared to place confidence in a technology with which they have had no primary experience.
Trust
The ability to develop trust is one of the most important human skills. Self-confidence in one's abilities is certainly a key factor. Trust also plays a paramount role in people's lives in many other respects: for example, from a sociological point of view, as trust in fellow humans, social institutions, and technology. Social systems cannot function without trust, which is so functional because it helps people to live and survive in a world whose complexity always requires more information and skills than any single person can have. I need not be able to build a car to drive it, but I must trust that the engineers designed it correctly. Not everyone is a scientist, but in principle, everyone can develop trust in the expertise of those who have the necessary scientific skills. In everyday life, verifying whether claims correspond to reality is often difficult. Then, the only option is to ask yourself whether you want to believe what you hear and whether criteria exist that justify your confidence in its credibility. In short, life in the highly complex modern world does not work without trust. This applies even more to future technologies, such as AI and robots.
Malle and Ullman (2021, p. 4) cite dictionary entries that define "trust" as "firm belief in the reliability, truth, or ability of someone or something"; as "confident expectation of something"; and as "the firm belief or confidence in the honesty, integrity, reliability, justice, etc. of another person or thing." In line with these, the authors relate their own concept of trust to persons and agents, "including robots," and postulate that trust's underlying expectation can apply to multiple different properties that the other agent might have. They also postulate that these properties make up four major dimensions of trust: "One can trust someone who is reliable, capable, ethical, and sincere" (Malle & Ullman, 2021, p. 4).
The acceptance of AI and robots requires trust and additional ingredients, a selection of which this chapter highlights. The selection includes the perceived utility and reliability of AI and robots, as well as their closeness to human life. We look at a wider array of areas of application, as well as robotic assistance in the everyday life and care of people. We ask about their respective acceptance, pay special attention to the role that respondents assign to communication in human–robot interaction, and relate this acceptance (i.e., the anticipated willingness to use) to patterns of trust in robotic assistance and autonomous AI, using latent variable analysis. As we detail below, this analysis reveals a pattern built on trust in the capability, safety, and ethical adequacy of AI and robots.
Well-Being in Human–Robot Interaction
Trust in AI and robots is one key factor; well-being is a second one. Both prove to be key factors for AI and robots in immediate, everyday human life. People have communication needs that they expect their social interactions to meet. People exchange ideas, take part in different types of conversations, express thoughts and feelings, develop empathy, expect respect and fairness (occasionally also affection and touch), and react in interpersonal encounters to content, interaction partners, and the course of such encounters with gestures and facial expressions. Interpersonal interaction can be a very complex structure comprising basic and higher needs, mutual expectations, and verbal and extraverbal stimuli and responses. Complexity is one thing, but social interaction is not only complex. People generally want to feel comfortable in their encounters with other people and find recognition and fairness, and sometimes even more, for example, security. Exceptions prove the rule, but for many people, the search for appreciation and social recognition is recognizable as a basic need. People tend to look for pleasant situations and avoid unpleasant situations as much as possible, at least in general. On the one hand, this describes a situation of interaction between people that can serve as a benchmark for the overwhelmingly difficult task of developing robots that may at least partially substitute for people in such interactions. If people generally expect to have pleasant interpersonal interactions, they will do the same when interacting with robots. On the other hand, this describes a situation highly relevant for attempts to gain acceptance among the population for interactions with robots. Such interaction lies in the future, so people must evaluate scenarios of human–robot interaction through the emotionally tinted ideas that these scenarios trigger in them today. Since one cannot have acquired any experience with scenarios that do not yet exist, definitions of trust that relate to human–robot interaction cover exactly this uncertainty, as Law and Scheutz (2021, p. 29) put it:
For example, if persons who have never worked with or programmed a robot before coming in contact with one, they will likely experience a high level of uncertainty about how the interaction will unfold. (...) Therefore, people choosing to work with robots despite these uncertainties display a certain level of trust in the robot. If trust is present, people may be willing to alter their own behavior based on advice or information provided by the robot. For robots who work directly and closely with people, this can be an important aspect of a trusting relationship.
The Individual's Self-Image
In the present context, we assume that acceptance depends on trust and well-being, and these factors, in turn, depend on a person's self-image. We assume particularly that people who see themselves as open to technical innovation are likely to develop this trust and anticipated well-being, while we expect the opposite from people who rely less on technical innovation and more on the tried and tested. Above all, people who always want to be among the first to try out technical innovations (early adopters) are likely to be open-minded toward AI and interaction with robots, at least substantially more often than others.
We also look at people who orient themselves toward science rather than religion regarding life issues, a concept that comes from the sociology of religion and refers to a deeper orientation than just a superficial interest in science (Wohlrab-Sahr & Kaden, 2013). We take it up in the context of AI because the very concept of artificial intelligence suggests relating it to the natural intelligence of a person, just to understand what artificial intelligence could mean. Without knowledge of the technical fundamentals of artificial intelligence, such as machine learning, AI can certainly assume a wide variety of meanings, including imaginary content with religious connotations. Accordingly, we assumed that a religiously shaped self-image can go hand-in-hand with a comparatively greater reserve toward AI.
Chapter Overview
This chapter presents findings from the population survey of the Bremen AI Delphi study. The focus is on trust in robotic assistance and willingness to use it, as well as the expected personal well-being in human–robot interaction. Using recent data from Eurostat, the European Social Survey, and the Eurobarometer survey, Chap. 2 extends the analysis to Germany and the EU. We ask if AI could lead to discrimination and whether the state should work as a regulatory agency in this regard. While we confine the exposition to statistical analysis, Chap. 5 discusses in detail the legal challenge of AI. Chapter 2 also investigates the worst-case scenario of cutthroat competition for jobs, using expert ratings from the Delphi. Chapter 3 describes the methodological basis of the study and explains the choice of statistical techniques in this chapter. Two further interfaces merit particular mention. Chapter 4 examines what one can learn from research on robots designed for harsh environments, while Chap. 6 addresses the "communication challenge" of human–robot interaction. Then, Chap. 7 addresses elderly care and the ethical challenges of using assistive robotics in that field.
1.2 Acceptance
1.2.1 Potential for Acceptance Meets Skepticism
In Germany, a high potential for AI acceptance prevails, reflecting an analysis of data from three Eurobarometer studies (European Commission, 2012; European Commission & European Parliament, 2014, 2017). These studies posed questions about the image that people have of robots and AI. Whereas in Germany in 2012, the proportion of those who "all in all" had a "very" or "fairly positive" image of robots was 75%, in 2014, it was 72%. For 2017, the question expanded to include the image of robots and AI, resulting in 64% choosing a "very" or "fairly" positive image in this regard.
A similar picture emerges for our survey in Bremen, where a positive view of robots and artificial intelligence also prevails. A "fairly positive" or "very positive" image of robots and artificial intelligence represents 75% of the responses, and the same proportion (75%) considers it "quite probable" or "quite certain" that robots and artificial intelligence are necessary because they can do work that is too heavy or too dangerous for humans.[1] In addition, 61% consider robots and AI to be good for society because they help people do their work or do their everyday tasks at home.
[1] The figures in this section were presented in a German-speaking public talk held at the University of Bremen in early 2020. See the video at https://ml.zmml.uni-bremen.de/video/5e6a5179d42f1c7b078b4569
The majority even sees the expected consequences of AI for the labor market and one's own workplace as positive rather than negative, as described below. This is in line with the result of an analysis of the comparative perception of 14 risks, which we report in more detail elsewhere (Engel & Dahlhaus, 2022, pp. 353–354). There, we asked respondents to rank from a list the five potential risks that worry them most. That respondents hardly regard "digitization/artificial intelligence" as such a risk (12th place out of 14) is noteworthy; only the specific risk of "abuse/trade of personal data on the Internet" received a top placement in this ranking (fourth place, after "climate change," "political extremism/assaults," and "intolerance/hate on the Internet").
However, at the same time, only 33% regard it as "quite probable" or "quite certain" that robots and artificial intelligence are "technologies that are safe for humans." Only 28% view them as "reliable (error-free) technologies," and only 24% as "trustworthy technologies." Other indicators also show this very clearly, especially when specific areas (see below) solicit trust and acceptance. Thus, a high potential for acceptance meets considerable skepticism, and a correspondingly wide scope exists for exploiting this potential.
1.2.2 The Closer to Humans, the Greater the Skepticism toward Robots
In which areas should robots have a role primarily, and in which areas should robots (if possible) have no role? Table 1.1 shows the list that we gave the respondents to answer these two separately asked questions. To rule out question-order effects (the so-called primacy and recency effects), we re-randomized the area sequence for each interview. The ranking asked for places 1 to 5.
When asked about first place, 28% named industry, 16% search and rescue services, 16% space exploration, 10% manufacturing, and 10% marine/deep-sea research. Four of these five areas also shape the preference for second place. There, 26% named marine/deep-sea research, 15% space exploration, 15% industry, 13% health care, and 10% manufacturing. Industry, space exploration, and deep-sea research also dominate the remaining places, followed by manufacturing and health care.
Table 1.1 List of areas where robots should be used primarily vs. not be used at all (areas presented in randomized sequence)
In industry | In caring for people | In the leisure sector
In manufacturing | In education | In transport/logistics
In the service sector | In search and rescue services | In agriculture
In people's private everyday lives | In space exploration | In the military
In health care | In marine/deep-sea research | In no area
Table 1.2 Probabilities of areas where robots should be used primarily vs. not be used at all. Entries: probability that an area is part of the respective TOP 5 ranking set, Pr(area = element of TOP 5 set)

Where should robots be used primarily? | Where should robots, if possible, not be used at all?
... in industry 0.7546 | Care of people 0.6204
... in space exploration 0.7454 | People's private lives 0.4954
... in marine/deep-sea research 0.6852 | Education 0.4861
... with search and rescue services 0.5139 | Military 0.3843
... in health care 0.4306 | Leisure sector 0.3704
... in manufacturing 0.3889 | Service sector 0.2407
... in transport/logistics 0.3519 | Health care 0.1065
... in agriculture 0.1991 | Agriculture 0.1065
... in the military 0.1528 | No area 0.0880
... in the service sector 0.0972 | Transport/logistics 0.0648
... in caring for people 0.0741 | Search and rescue services 0.0463
... in people's private everyday lives 0.0694 | Manufacturing 0.0231
... in education 0.0463 | Industry 0.0093
... in the leisure sector 0.0370 | Space exploration 0.0046
... in no area 0.0185 | Marine/deep-sea research 0.0
The preferences at the other pole are also noteworthy. When asked where robots should not be used at all, four areas dominate: caring for people, private everyday life, education, and leisure.
For a more compact picture, we calculated the probability that an area is part of the respective TOP 5 preference set and plotted the two corresponding distributions against each other (Table 1.2 and Fig. 1.1). While industry, space exploration, and marine/deep-sea research are clearly the favorite areas, respondents endorse keeping three areas free of robots: care of people, people's private everyday lives, and education. While these areas polarize responses the most (Fig. 1.1), the following area clusters do the same, though not as dramatically as the former: search and rescue services, health care, manufacturing, and transport/logistics, on the one hand; military, leisure, and service sectors, on the other.
For a subset of the areas, an interesting comparison is possible with data for
Germany, collected some years ago as part of a Eurobarometer study (European
Commission, 2012). Figure 1.2 shows the result of this data analysis. Even if the
percentages are not directly comparable across Figs. 1.1 and 1.2 (due to different
calculation bases, partly different question wording), the rough pattern relates them
to one another and reveals remarkable stability over time. As is true today, the use of
robots in space exploration, search and rescue services, and manufacturing had
already met with comparatively high levels of acceptance in 2012; the lack of
acceptance in care, education, and leisure appears similarly stable. Otherwise, two
changes stand out: the use of robots in the military appears more negative today;
conversely, their use in health care appears more positive today.
Fig. 1.1 Where robots should be used primarily vs. not be used at all
Fig. 1.2 Robotic use: Preferred areas against areas that should be banned by law
1.2.3 Respondents Find It Particularly Difficult to Imagine Conversations with Robots
We foresee an area comprising two challenges, arising on the premise that assistance robots for the home or for care will only find acceptance in the long term if they can interact with people in a way that people perceive as pleasant communication. We can hardly imagine a human–machine interaction that aligns with repeated frequent encounters but does not satisfy human communication needs. This applies to the extent that humans' inclination toward anthropomorphism assigns assistance robots the role of digital companions in daily interaction (Bovenschulte, 2019; Bartneck et al., 2020). Programming assistant robots with the appropriate communicative skills is the first major challenge; the second lies in the fact that humans still find communicating with a robot extremely difficult to imagine at all. This applies to daily life in general, as Fig. 1.3 and the next paragraph outline, and specifically to robotic assistance in care.
Fig. 1.3 Imagining that humans communicate with robots and receive help from them: Mean
values (medians) and pertaining upper/lower bounds of the middle 50% of responses
Figure 1.3 displays box plots of the interpolated quartiles (see the appendix, Table 1.7, for the underlying survey-weighted distributions). The introductory question to this block asked if the respondent could imagine conversational situations in which a robot that specializes in conversations would later keep him/her company at home. In Fig. 1.3, this appears in the middle of the chart. The pertaining median of 2.3 indicates a mean value slightly above "probably not," with the middle 50% of responses ranging between 1.5 (this value equals a lower bound exactly in between 1 = "not at all" and 2 = "probably not") and 3.2 (this upper bound lies slightly above the 3 = "possibly" that indicates maximum uncertainty). The respondents consider it unlikely that a robot will keep them company at home in the future. They are even less able to imagine special kinds of conversations: for example, trivial everyday conversations, in case a respondent feels lonely or ever needs advice on life issues. Respondents nearly completely rule out convivial family discussions in which a robot participates. The same applies to imagining the use of robots that look and move like a pet (Table 1.8). Only conversations in old age with someone no longer mobile were not strictly ruled out, though, in this regard too, the mean value remains slightly below the 3 = "possibly" choice, and the range of the middle 50% of responses includes the 2 = "probably not" and excludes the 4 = "quite probable" at the same time. This is certainly due to the "human factor" in interpersonal communication; humans are humans, and robots are machines, no matter how excellent their robotic skills are. Convincing people that robots will later be able to communicate with people in the same way that humans do with each other today will probably be very difficult.
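The quartiles reported above are interpolated rather than read off the raw categories. A minimal sketch of one common interpolation, treating ordinal category k as the interval [k − 0.5, k + 0.5]; we assume equal survey weights here (the chapter's figures use survey-weighted distributions), and the function name and toy responses are our own illustration.

```python
import numpy as np

def interpolated_quantile(responses, q):
    """Quantile for ordinal data, reading category k as the interval
    [k - 0.5, k + 0.5] and interpolating linearly within the category
    that contains the q-th fraction of the sample."""
    values, counts = np.unique(np.asarray(responses), return_counts=True)
    cum = np.cumsum(counts) / counts.sum()          # cumulative shares
    i = int(np.searchsorted(cum, q))                # category holding quantile q
    f_below = cum[i - 1] if i > 0 else 0.0          # cumulative share below it
    share = counts[i] / counts.sum()                # share inside the category
    return values[i] - 0.5 + (q - f_below) / share  # linear interpolation

# toy responses on the 1-5 scale (1 = "not at all", ..., 5 = "quite certain")
answers = [1, 2, 2, 2, 3, 3, 4, 1, 2, 3]
print([round(interpolated_quantile(answers, q), 2) for q in (0.25, 0.5, 0.75)])
```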
1.2.4 Respondents Can Imagine Help with Household Chores
and Care More Easily than Talks with Robots
Can the respondents imagine getting help with household chores? The interview question was: "Research is working on developing robots that will later help people with household chores. We think of examples of this kind: setting and clearing the table, loading and unloading the dishwasher, taking crockery out of cupboards and stowing it back in, fetching and taking away items. For the moment, please imagine that such household robots are already available today. And regardless of financial aspects: Could you imagine receiving help in this way at home?" In Fig. 1.3, the second box plot from the right graphs the pertinent data from Table 1.7: a mean value (median) of 3.2 (slightly above "possibly") and a range from 2.2 to 4.3 that excludes "probably not" and includes "quite probable." Therefore, respondents more easily imagined getting help around the house this way than having conversations with robots.
1.2.5 Robotic Assistance in Care Is as Imaginable as Robotic
Assistance with Household Chores
About the same level of acceptance characterizes robotic assistance in care. The survey asked respondents to indicate if they would consent to the involvement of an assistant robot in the care of a close relative and in their own care. Two box plots in Fig. 1.3 graph the pertinent data from Table 1.7 in the appendix. The mean values of the two distributions lie slightly above "possibly," with the middle 50% of responses clearly excluding "probably not" and including "quite probable" in the case of the respondents' own care. Expressed in percentages, this implies that a third of respondents would find it "quite probable" or "quite certain" that they would agree to the involvement of an assistant robot in the care of a close relative. This proportion increases from 32.4% to 39.1% for the respondents' own care (Table 1.3, rows labeled "all").
Twenty-seven percent of the respondents indicated that care is a sensitive issue for them. When asked whether the questions about care may have been perceived as "too personal," 73% answered "not at all," 19% "a little bit," 7% "fairly personal," and 1% "a lot too personal." Table 1.3 collapses the last three groups and shows for the resulting "sensitive" group how much this group agrees with the participation of a robot in care. Then, only 22.8% would consider "quite probable" or "quite certain" the involvement of an assistant robot in the care of a close relative, and only 29.3% would agree to the involvement of an assistant robot in their own care (Table 1.3, rows labeled "sensitive"). Therefore, approval is significantly lower if the topic of "care" is not only of abstract importance. If it is also personally relevant, the approval values drop by almost 10 percentage points.
Table 1.3 Consent to robotic assistance in the care of a close relative and of the respondent. Entries: row percentages. "Sensitive" = survey questions on care were perceived as too personal

Consent to robotic assistance in the care of ... | If | Not at all | Probably not | Possibly | Quite probable | Quite certain | Don't know
... close relative | All | 13.6% | 14.1% | 36.2% | 22.1% | 10.3% | 3.8%
... close relative | Sensitive | 5.3% | 19.3% | 45.6% | 19.3% | 3.5% | 7.0%
... respondent | All | 12.9% | 12.9% | 32.4% | 26.7% | 12.4% | 2.9%
... respondent | Sensitive | 5.2% | 24.1% | 36.2% | 24.1% | 5.2% | 5.2%
Irrespective of these results, a "little" more than half of the respondents expect the involvement of assistance robots in care in the future. In the interview, we started the block with questions about care as follows: "The need for care is already a major issue in society, especially for people in need of care and their families themselves. The situation is made even more difficult by a lack of trained specialists. In research, this situation has triggered the development of assistance robots for care. This raises an extremely sensitive question: What would your expectation be: Will it happen within the next ten years that people and robots in care facilities will share the tasks of looking after people in need of care?" Table 1.4 shows that 51.9% expect this. However, such a development would not meet with unanimous approval. Only about a third of the survey participants would rate this positively. We asked: if "robots were used to care for people in need of care," would it be perceived as "very good," "good," "not so good," or "not at all good"? Nine percent voted for very good, 26% said it would be good, 37% said not so good, and 23% said not at all good (6% did not know).
1.3 Trust in Robotic Assistance and Autonomous AI
Acceptance presupposes trust, and this trust is only available to a limited extent. Figure 1.4 shows this for seven indicators. These concern the use cases "selection of job seekers" (S), "legal advice" (L), "algorithms" (A), and "autonomous driving" (C). Again, the results appear as box plots. We refer to Fig. 1.4 and these indicators in the next sections.
Table 1.4 Expectation that people and robots will share the tasks of care ("In future, people and robots will share the tasks in care facilities ...")
Not at all: 4.3% | Probably not: 13.9% | Possibly: 27.9% | Quite probable: 42.3% | Quite certain: 9.6% | Don't know: 1.9%
1.3.1 Trust in the Integrity of Applicant Selection
To gain trustworthiness, AI as a technology must appear reliable (error-free) and safe for humans. But this is not just about the technology itself. Possible hidden interests on the part of those developing AI or commissioning its development also play a decisive role, so this is also about the interests behind the technology. From a normative (ethical or political) point of view, this is clear, for example, in the recommendations for trustworthy AI developed for the EU Commission. However, to gain acceptance, AI must also comply with ethical standards from the population's perspective, as clearly appears in the example of applicant selection in the labor market.
We asked the respondents four related questions, starting with: "Please imagine, in large companies, the preselection of applications for vacancies would be carried out automatically by intelligent software. Would you trust that such a preselection would only be based on the applicants' qualifications?" In Fig. 1.4, the second box from the left, labeled "S: qualified," describes the responses to this survey question, again in terms of the median and the upper/lower bounds of the interquartile range (also reported in Engel & Dahlhaus, 2022, p. 359, Table 20.A3). This box corresponds to a mean value of 2.6, with the middle 50% of responses ranging between 1.5 and 3.6. Accordingly, the central response tendency is between "probably not" and "possibly," while the middle 50% of the answers include "probably not" and exclude "quite probable."
We relate this trust to the respondents' preference of selection mode and observe the expected close correlation. "Imagine again, in large companies, the preselection among applications for vacancies would be made automatically by intelligent software. What would you personally prefer: automated or human-made preselection?" The percentages in Table 1.9 reveal very clearly that the more the respondents trust that only qualifications count, the more they vote for automated preselection of job applicants and the less they vote for preselection by people.
A related finding is also noteworthy, concerning the two remaining survey questions of the present block. They explore the belief that automated preselection protects applicants from unfair selection. The first was: "Imagine again, in large companies, the preselection among applications for vacancies would be made automatically by intelligent software. Would you trust that such a preselection would effectively protect applicants from unfair selection or discrimination?" In Fig. 1.4, this question is labeled "S: Fair," the left-most box plot. With a mean value of 2.4 and lower/upper bounds of 1.6 and 3.5 for the middle 50% of responses, respondents regard this as just as unlikely as only the applicant's qualifications counting. Though the respondents less often prefer automated to human applicant preselection (21% vs. 61.9%; no matter: 11.4%; don't know: 5.7%), they consider it possible that automated preselection guards more effectively against discrimination than human preselection. The follow-up question was worded this way: "Imagine again, in large companies, the preselection among applications for vacancies would be made automatically by intelligent software. Would you trust that such a preselection would protect applicants more effectively from unfair selection and discrimination than a human preselection?" In Fig. 1.4, this question is labeled "S: fairer" (the second box plot from the right). Here, we obtain a mean value of 3.1 (slightly above "possibly") and lower/upper bounds of 2.2 and 4.0 for the middle 50% of responses, which exclude "probably not" and include "quite probable."
Fig. 1.4 Trust in robotic assistance and autonomous AI: Mean values (medians) and pertaining
upper/lower bounds of the middle 50% of responses
1.3.2 Legal Advice
AI will likely transform not only simple routine activities but also highly skilled academic professions. Legal advice is just one example. We wanted to know how much people trust legal advice when a robot delivers it: "Please imagine that you need legal advice and that you contact a law firm on the Internet. There, a robot takes over the initial consultation. Would you trust that it can advise you competently?" In Fig. 1.4, this item is labeled "L: competent." The pertaining quartiles are Q1 = 1.9, Q2 = 2.8, and Q3 = 3.7. They indicate a mean response slightly below "possibly" and a middle range of responses that includes "probably not" but excludes "quite probable."
1.3.3 Algorithms
Relating to algorithms, uncertainty and skepticism also prevail. Despite the wide use of comparison portals, do people trust them? We asked: "Please imagine that you are looking for a comparison portal on the Internet to buy a product or service there. Would you trust that the algorithm would show you the best comparison options in each case?" In Fig. 1.4, the item is labeled "A: best options." Here, the major response tendency is "uncertainty" in a double sense: a mean tendency slightly below "possibly," with the middle 50% of responses excluding both "probably not" and "quite probable" (Q1 = 2.2; Q2 = 2.9; Q3 = 3.5).
1.3.4 Self-Driving Cars
The development of autonomous driving is already very advanced, and self-driving cars will very likely soon be a normal part of the city streetscape. Accidents with such cars during practical tests typically get substantial media attention around the world. That may explain why people are, perhaps surprisingly, still quite skeptical about this technology. We phrased two survey questions this way: "It is expected that self-driving cars will take part in road traffic in the future. Will you be able to trust that the technology is reliable?" In Fig. 1.4, this item is labeled "C: reliable." Here, too, we observe a mean response below "possibly" and lower/upper bounds of the middle 50% of responses that include "probably not" but exclude "quite probable" (Q1 = 1.8; Q2 = 2.7; Q3 = 3.7). At least, the respondents trust in the ethical programming involved, insofar as they trust the "safety first" aspect. In Fig. 1.4, this question is labeled "C: safety first." We asked: "Will you be able to trust that self-driving cars will be programmed to put the safety of road users first?" In this regard, the mean response lies between "possibly" and "quite probable," with the middle 50% of responses excluding "probably not" and including "quite probable" (Q1 = 2.4; Q2 = 3.5; Q3 = 4.2).
1.3.5 Patterns of Trust and Anticipated Use of Robotic
Assistance
Do the indicator variables of trust in robotic assistance and its anticipated use constitute one single basic orientation toward AI and robots that proves invariant across use cases, functions, and contexts? Or should we assume two more or less correlated basic orientations: on the one hand, trust, and on the other, acceptance? Or do people judge this technology in a more differentiated, context-dependent manner, according to the functions and tasks to be fulfilled?
Fig. 1.5 Latent correlations between the factors described in Table 1.5
The confirmatory factor analysis (CFA) detailed in the appendix was carried out to answer these questions. It shows that the assumption of a more differentiated structure achieves the best fit of model and data (Table 1.10). Figure 1.5 reports the correlations among the seven factors of trust and anticipated use of robotic assistance that correspond to this latter model. The seven factors involved in this correlation matrix rest on 19 indicator variables, most of which this chapter introduced earlier. The appendix (Table 1.11) details how these variables constitute the factors, along with a documentation of question wording and factor loadings.
Respondents who can imagine involving robot assistants in their own care or the care of close relatives can also imagine communicating with robots that specialize in this at home. These ideas are closely related; we observe the relationship at the highest correlation (r = 0.66). Conversely, this means that without a willingness to communicate with robot assistants, there is no willingness to involve robots in one's care. Noticeably, talk and drive also correlate very strongly (r = 0.64). Pointedly overstated, anyone who can imagine communicating with a robot at home also has confidence in the technology of self-driving cars, and vice versa. This close relationship between the belief that autonomous driving is reliable and safe for humans and the anticipated readiness to communicate with robots at home might indicate not only the particularly important role of communication in both AI use fields but also the expectation that assistant robots at home should be as competent as autonomously acting AI.
A third correlation greater than 0.5 concerns advise and decide (r = 0.57), that is, the confidence in competent robotic advice in an important field (e.g., legal advice) and the readiness for getting robotic advice in decision-making. At the same time, decide correlates least with choose (r = 0.17), that is, the belief that automated preselection would protect job applicants from unfair selection and discrimination. This weak relationship is interesting insofar as it concerns technology capabilities, on the one hand, and interests behind special technology applications, on the other. Stated otherwise, highly capable technologies can also be used in the pursuit of interests that people can evaluate quite differently in normative (political and ethical) terms. The perceived performance of a technology is one thing; the perceived integrity of its application is another. Here, both represent widely independent assessment dimensions that require separate consideration.
Table 1.5 The seven factors of trust and anticipated use of robotic assistance

Trust: degree of belief ...
Advise: ... that a robot would provide competent legal advice
Safe: ... that robots and AI are safe for humans, trustworthy, and reliable
Choose: ... that an automated preselection of job applicants would protect from unfair selection and discrimination
Drive: ... that self-driving cars will be reliable and programmed to put the safety of road users first

Anticipated use: degree of belief ...
Decide: ... to use an app for smartphones that can advise people making decisions
Care: ... to consent to the participation of an assistant robot in one's care
Talk: ... to have conversations with specially trained robots and be kept company by them at home
1.4 Accepting Robotic Assistance and Talking with Robots
In addition to the latent factors and their indicator variables, the present confirmatory factor analysis includes imagining getting help with household chores. This observed variable is regressed on the two latent factors talk and care. While talk's estimate of effect proves statistically significant,

b_talk = 0.657, b/s.e. = 5.81, β_talk = 0.552,

care's estimate of effect approaches such two-tailed significance only approximately:

b_care = 0.169, b/s.e. = 1.76, β_care = 0.166.

If this were a linear regression, b would indicate the expected change in the target y for a unit change in x1 (while holding x2 constant at the same time). However, in the present case, an ordinal scale (1 = not at all, ..., 5 = quite certain) measures each of the model's observed variables used throughout this chapter; thus, probit regressions estimate all relationships between latent factors and observed variables. Then, the estimates of effect indicate how individuals' values on talk and care affect the probability of y falling into specified regions on the target scale.
Figure 1.6 illustrates this for one of the two latent factors, talk. In this figure, the outer (dashed) pair of vertical lines indicates the observed minimum and maximum values [−1.8; 1.8] on the latent talk scale, while the inner (dotted) pair of vertical lines [−0.6; 0.5] indicates the first and third quartiles on this scale of factor scores.
Viewed from left to right, the graphs show the curvilinear course of the probabilities that the answers given on the ordinal y scale are
Fig. 1.6 Estimated probabilities of the respondent imagining getting help with household chores as
a function of personal acceptance of conversations with appropriately trained assistant robots
less than or equal to 1 ("not at all"),
in the range of 1 < y ≤ 2 (greater than "not at all," including "probably not"),
in the range of 2 < y ≤ 3 (greater than "probably not," including "possibly"),
in the range of 3 < y ≤ 4 (greater than "possibly," including "quite probable"),
greater than 4 (greater than "quite probable").
With increasing talk values (i.e., with stronger beliefs in one's accepting conversations with specially trained robots and being kept company by them at home), the probability curves behave as expected: They fall for "not at all," and they consistently rise for "quite probable." The probabilities between these extremes also develop consistently. In this regard, the graphs in Fig. 1.6 show how the turning point from increasing to decreasing probability values shifts from left to right, depending on whether the probability is considered for smaller or larger values of observed y.
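Such curves follow directly from the ordered probit specification: with thresholds τ1 < ... < τ4, P(y ≤ k | talk) = Φ(τk − b·talk), and each category's probability is the difference of adjacent cumulative probabilities. A minimal sketch in Python; the slope is the chapter's b_talk, but the thresholds below are made up for illustration, not the fitted estimates.

```python
import numpy as np
from scipy.stats import norm

b = 0.657                          # slope for talk, from the chapter
taus = [-1.5, -0.5, 0.5, 1.5]      # illustrative thresholds (made up)

talk = np.linspace(-1.8, 1.8, 181)                   # range of the talk scale
cum = norm.cdf(np.subtract.outer(taus, b * talk))    # P(y <= k), k = 1..4
cum = np.vstack([np.zeros_like(talk), cum, np.ones_like(talk)])
cat_probs = np.diff(cum, axis=0)                     # P(y = k), k = 1..5

# e.g., the probability of "not at all" falls as talk increases:
print(cat_probs[0, [0, 90, 180]].round(3))
```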
1.5 Technical Innovation, Religion, and Human Values and the Tried and Tested as Elements of the Individual Self-Image
AI and robots represent future technologies. Therefore, assuming that people who
are open-minded toward technical innovations will more likely accept them than
people who tend to rely on the tried and tested is reasonable. In addition, we assume
greater acceptance of robotic assistance among people more oriented toward science
than religion, regarding life issues.
The confirmatory factor analysis that Table 1.12 reports is used to compute factor scores on these three dimensions of self-image for each respondent. As expected, people tend either to be open to technical innovations or to rely on the tried and tested (factor correlation = −0.43). The personal proximity/distance to the market and fashion of technical achievements also plays a role in this contrast.
Conversely, the orientation toward science versus religion in life, as a third dimension, contributes only a partial contrast to the overall picture. On the one hand, this orientation proves to be independent of the openness to technical innovations (0.01, n.s.); on the other hand, it correlates negatively with the orientation toward human values and the tried and tested (−0.39). Regardless of their openness to technical innovations, concerning life issues, people accordingly tend to orient themselves more toward science than human values and religion.
Table 1.6 shows how these dimensions of the individual self-image correlate with the dimensions of trust in AI and robots and their anticipated use. Table 1.6 shows particularly the correlations between the respective scales of factor scores, revealing a clear pattern in this regard. Except for the statistically insignificant relation to choose, openness to technical innovation is consistently associated with positive correlations, while (again, except for the statistically insignificant relation to choose and, here, also to safe) the orientation toward human values and the tried and tested is consistently associated with negative correlations. Therefore, whether someone is open to technical innovations and wants to be among the first to try them out or, on the contrary, relies more on human values and the tried and tested and less on the acquisition of technical achievements makes a difference.
In terms of statistically significant correlations, the third dimension of self-image is not quite as effective. Those who orient themselves more toward science than religion when it comes to life issues trust competent legal advice by a robot more and would also tend to accept the participation of an assistant robot in one's care. Such an orientation also favors imagining feeling comfortable with anticipated situations of human–robot interaction.
Table 1.6 Individual self-image and the anticipated use of/trust in AI. Entries: Pearson correlations between factor scores obtained from ordinal probit regression

| Open to technical innovation | Oriented more toward science than religion when it comes to life issues | Relies rather on human values and the tried and tested
Feel good | 0.43 | 0.15 | −0.38
Drive | 0.42 | 0.05 | −0.22
Talk | 0.39 | 0.04 | −0.38
Care | 0.25 | 0.26 | −0.35
Decide | 0.19 | 0.09 | −0.27
Safe | 0.18 | 0.11 | 0.07
Choose | 0.13 | 0.01 | 0.01
Advise | 0.16 | 0.22 | −0.33
Fig. 1.7 Willingness to be supported by a robot at home, open-mindedness, and AI feel-good factor
1.6 Feeling at Ease with Imagined Situations of Human–Robot Interaction
Whether AI applications will be accepted in the future depends crucially on the feelings they trigger in people today. Because the applications do not yet exist in people's everyday lives, people lack the personal experience from which they could form attitudes toward AI and robots. Instead, judgments today depend on people imagining what they may face in this regard in the future. Therefore, we asked the respondents how uncomfortable or comfortable they would feel in eight fictitious situations in which humans interact with robots and, via a confirmatory factor analysis detailed elsewhere (Engel and Dahlhaus 2022, p. 360, Table 20.A4), found that these assessments constitute a single factor. Figure 1.7 plots this feel-good factor against the open-mindedness toward technological innovation. The scattergram also distinguishes the respondents' willingness to get robotic help with household chores, which this chapter describes earlier, and reveals two major relationships: first, the stronger this open-mindedness, the stronger the feel-good scores; and second, the higher willingness scores cluster in the upper-right region of the scatterplot and the lower willingness scores in its lower-left region. This reflects the fact that all three variables correlate strongly and positively with each other and confirms an equivalent result regarding another target variable, the willingness to seek AI-driven decision support.²
² Available at https://github.com/viewsandinsights/AI

We regard the AI feel-good factor as a mechanism by which open-mindedness toward technical innovation leads to anticipated AI use. Formally, it is an intervening variable. A simple test can show whether open-mindedness about technological innovation results, via this anticipated comfort with imagined situations of human–robot interaction, in the willingness to accept such robotic assistance at home. Regarding the effect of open-mindedness (x) on accepting this assistance (y), a probit regression yields a statistically significant estimate of the effect:

$b_{yx} = 0.34;\; b/\text{s.e.} = 2.65;\; \beta_{yx} = 0.24;\; R^2 = 0.059.$

This direct effect would have to become zero if the feel-good factor were included in the model as a presumably intervening variable. This is exactly what happens here. If we extend the model by this factor, the direct effect drops to zero,

$b_{yx \mid z} = 0.01;\; b/\text{s.e.} = 0.10,$

while we observe at the same time two statistically significant estimates of effect: a first (linear regression) effect for the relation of open-mindedness (x) to feel-good (z),

$b_{zx} = 0.33;\; b/\text{s.e.} = 5.36;\; \beta_{zx} = 0.43;\; R^2 = 0.186,$

and a second (probit regression) effect for the relation of feel-good (z) to acceptance (y),

$b_{yz} = 1.03;\; b/\text{s.e.} = 8.34;\; \beta_{yz} = 0.56,$

yielding an explained variance of $R^2 = 0.31$.
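For readers who want to retrace this kind of intervening-variable test, the following R sketch outlines the three regressions involved. The data frame and variable names (dat, accept, openness, feelgood) are hypothetical, and the sketch ignores the survey weighting applied in the chapter's analyses:

```r
## Minimal sketch of the intervening-variable (mediation) test described above.
## 'dat', 'accept', 'openness', and 'feelgood' are hypothetical names;
## 'accept' is assumed to be an ordered factor (5-point scale).
library(MASS)  # polr() provides ordinal probit regression

## Step 1: total effect of open-mindedness (x) on acceptance (y)
m_total <- polr(accept ~ openness, data = dat, method = "probit", Hess = TRUE)

## Step 2: effect of open-mindedness (x) on the feel-good factor (z)
m_feelgood <- lm(feelgood ~ openness, data = dat)

## Step 3: joint model; if feel-good fully intervenes, the coefficient of
## 'openness' should drop to (about) zero while 'feelgood' stays significant
m_joint <- polr(accept ~ openness + feelgood, data = dat,
                method = "probit", Hess = TRUE)

summary(m_total)
summary(m_feelgood)
summary(m_joint)
```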
1.7 Trustworthiness and Well-Being in the Context of Robotic Assistance
Germans largely have a positive image of artificial intelligence and robots, but they trust this technology to a significantly lower extent. This involves trust both in the technology itself and in the integrity of its applications. The closer AI gets to humans, the more the population questions its acceptance: we observe great acceptance of AI in space exploration and deep-sea research and, at the same time, substantial reservations about its use in people's daily lives. This represents a great challenge for the development of systems of robotic assistance for everyday life and the care of people. However, because large parts of the population have a positive image of AI, there exists fair potential to convince people, on an always well-founded basis, of the trustworthiness of this technology. Following the patterns of trust we describe above, such persuasion campaigns could aim at specific elements of trust, such as trust in the capability, safety, and ethical adequacy of AI and robotic assistance.
In any case, the further development of AI applications should take people's ideas, needs, hopes, and fears into account. From the analysis above, for example, we can learn that the population is critical of communicating with robots in the domestic context. But we also learn that the readiness to let robots assist in one's care depends largely on this imagined willingness to talk with robots. Furthermore, respondents assign the ability to talk to someone in need of care only a very subordinate role in the qualification profile of a care robot, as Chap. 6 shows.
Much persuasion is required in other respects as well. People judge scenarios of future human–robot interaction based on the emotionally charged ideas that such scenarios trigger in them today. In fact, without primary experience, one can only imagine what such a situation would be like. The point is just that these beliefs affect the anticipated willingness to use robotic assistance, however well-founded or unfounded they may be. Therefore, conveying a reliable basis of experience and relying on a maximum of transparency in all relevant respects regarding the further development of robotic assistance appear very useful.
Appendix
Table 1.7 Imagination of talking with a robot: interpolated quartiles of survey-weighted frequency distributions

Each scale: 1 = not at all, 2 = probably not, 3 = possibly, 4 = quite probable, 5 = quite certain

                                                                      Q1    Q2    Q3
Could you imagine conversational situations in which a robot that
specializes in conversations will later keep you company at home?    1.5   2.3   3.2
What kind of conversations could you imagine?
  Trivial everyday conversations                                     1.1   2.0   3.1
  Conversations in case you ever feel lonely                         1.1   1.9   3.0
  Conversations in old age, if you are no longer so mobile and can
  no longer easily socialize with people                             1.4   2.7   3.4
  In case you should ever need advice on life issues                 1.2   2.1   3.1
  Convivial discussions with the family, in which a robot also
  takes part                                                         1.0   1.4   2.3
Could you imagine a robot helping you with household chores?         2.2   3.2   4.3
Consent to the participation of an assistant robot in the care of
a close relative                                                     2.3   3.1   3.8
Consent to the involvement of a robot assistant in one's own care    2.4   3.2   4.0
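The interpolated quartiles reported in these tables can be derived from the weighted frequency distribution of each 5-point item. The following R sketch shows one standard way to do this, treating each scale value c as the class interval [c − 0.5, c + 0.5]; the function and variable names are illustrative, not the authors' code, and the exact interpolation rule used in the chapter is an assumption here:

```r
## Minimal sketch: interpolated quantile of a 5-point item under survey weights.
## Each category c is treated as the class interval [c - 0.5, c + 0.5].
interp_quantile <- function(x, w, p) {
  f <- tapply(w, factor(x, levels = 1:5), sum)  # weighted class frequencies
  f[is.na(f)] <- 0
  f <- f / sum(f)                               # weighted proportions
  cum <- cumsum(f)
  k <- which(cum >= p)[1]                       # class containing the quantile
  below <- if (k > 1) cum[k - 1] else 0         # cumulative proportion below it
  (k - 0.5) + (p - below) / f[k]                # linear interpolation within class
}

## Example: quartiles of a hypothetical item 'talk_home' with weights 'wt'
## sapply(c(0.25, 0.50, 0.75),
##        function(p) interp_quantile(dat$talk_home, dat$wt, p))
```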
Table 1.8 Robots as pets: interpolated quartiles of survey-weighted frequency distributions

Each scale: 1 = not at all, 2 = probably not, 3 = possibly, 4 = quite probable, 5 = quite certain

                                                                      Q1    Q2    Q3
Imagine if robots were programmed to keep people company, and
robots were made to look and move like a pet, such as a dog or a
cat. What do you think about it? Could you imagine keeping a robot
as a pet in your home?                                               1.1   1.7   2.5
Occasionally one hears that, for humans, pets are part of the
family, as if they were humans themselves. Even if, unlike
animals, robots are not living beings but machines: What would
your assumption be, could robots later also fare in the same way
as domestic animals do today? So that they too could belong to
the family one day?                                                  1.6   2.5   3.4
Table 1.9 Preselection of job applicants and belief that only the qualification counts

Preference for mode of preselection of job applicants, in row percentages

Only the applicant's
qualification counts        Automated   By people   No matter   Don't know    N
Not at all                      3.9        90.2        3.9          2.0       51
Probably not                    6.0        80.0        4.0         10.0       50
Possibly (a)                   30.2        49.1       17.0          3.8       53
Quite probable/certain         41.1        32.1       19.6          7.1       56

(a) Incl. "don't know." A related graph is available at https://github.com/viewsandinsights/AI
Table 1.10 Goodness of fit of three related models of trust and anticipated use of robotic assistance

Model                                            Robust Chi²   df    p     CFI   TLI   RMSEA   SRMR
1-factor model: acceptance and trust
collapsed into one factor                           887.6      170  0.00   0.82  0.80  0.155   0.178
2-factor model: acceptance vs. trust                588.98     169  0.00   0.89  0.88  0.119   0.151
7-factor model as reported in Table 1.11 below      164.06     146  0.15   1.00  0.99  0.027   0.049
Table 1.11 Trust in robotic assistance and its anticipated use: CFA factor loadings

Each scale: 1 = not at all, 2 = probably not, 3 = possibly, 4 = quite probable, 5 = quite certain

                                                                            Loadings
HUMAN–ROBOT COMMUNICATION AT HOME (Talk)
Digital voice assistants are already being used in some private
households to answer simple questions from humans. Please imagine such
technical assistants were developed further in such a way that a person
can hold conversations with them in the same way that people talk to one
another: Could you imagine conversational situations in which a robot
that specializes in conversations will later keep you company at home?       0.84
What kind of conversations could you imagine?
  Trivial everyday conversations                                              0.75
  Conversations in case you ever feel lonely                                  0.93
  Conversations in old age, if you are no longer so mobile and can no
  longer easily socialize with people                                         0.94
  In case you should ever need advice on life issues                          0.76
  Convivial discussions with the family, in which a robot also takes part    0.75

ASSISTANT ROBOTS IN THE CASE OF NEED FOR CARE (Care)
Assuming that an assistant robot would, later on, be able to carry out
its tasks competently, reliably, and without errors: If you think about
your personal environment and assume that a close relative of yours
would need care, and you would be asked for consent to the participation
of an assistant robot in the care of this relative: Would you agree?         0.98
Assuming again that an assistant robot would, later on, be able to carry
out its tasks competently, reliably, and without errors: How about
yourself? Let us assume that you yourself would one day be in need of
care. Would you agree to the involvement of a robot assistant in your
own care?                                                                     0.97

AI-DRIVEN ADVICE IN THE CASE OF DECISIONS* (Decide)
What if there were an app for smartphones that can advise people at home
or on the go in everyday situations: Would you call in such a personal
advisor for decisions that you have to make in everyday life?                 0.95
And what if there were an app for smartphones that can advise people in
important life situations: Would you call in such a personal advisor for
important decisions?                                                          0.92

AUTONOMOUS DRIVING (Drive)
It is expected that self-driving cars will take part in road traffic in
the future. Will you be able to trust that the technology is reliable?        0.88
Will you be able to trust that self-driving cars will be programmed to
put the safety of road users first?                                           0.95

AI PROTECTS AGAINST DISCRIMINATION* (Choose)
Imagine again that, in large companies, the preselection among
applications for vacancies would be made automatically by intelligent
software. Would you trust that such a preselection would effectively
protect applicants from unfair selection or discrimination?                   0.91
Imagine again that, in large companies, the preselection among
applications for vacancies would be made automatically by intelligent
software. Would you trust that such a preselection would protect
applicants more effectively from unfair selection and discrimination
than a human preselection?                                                    0.91

AI IS SAFE FOR HUMANS* (Safe)
Robots and artificial intelligence are reliable (error-free) technologies     0.75
Robots and artificial intelligence are technologies that are safe for
humans                                                                        0.98
Robots and artificial intelligence are trustworthy technologies               0.82

AI ADVISES COMPETENTLY/TRUSTFULLY (Advise)
Please imagine that you need legal advice and that you contact a law
firm on the internet, where a robot takes over the initial consultation.
Would you trust that it can advise you competently?                           0.88
Please imagine that you are looking for a comparison portal on the
internet in order to buy a product or service there. Would you trust
that the algorithm would show you the best comparison options in each
case?                                                                         0.51

N = 177. Displayed are standardized factor loadings. All factor loadings prove statistically highly significant. The CFA treats all scales as 5-point ordinal scales using probit regression. Survey weights are employed to handle unit nonresponse. The CFA attains a very acceptable goodness of fit: Robust Chi² = 164.06, df = 146, p = 0.15; CFI = 1.00, TLI = 0.99; RMSEA = 0.027; SRMR = 0.049. Because the frequency distributions involve minor percentages of "don't know" responses, these "don't know" responses were recoded to the mid category "possibly," acting on the auxiliary assumption that both categories equivalently express maximal uncertainty. CFA computed using the R package lavaan. The factors decide, choose, and safe are also part of a similar CFA reported in Engel and Dahlhaus (2022, p. 359)
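Since the table note names the R package lavaan, the following sketch indicates what the measurement part of such a model could look like. The item names (talk1 ... advise2) and the data frame dat are hypothetical, and the application of the survey weights is omitted here:

```r
## Minimal sketch of a 7-factor ordinal CFA of the kind reported in Table 1.11.
## Item names and 'dat' are hypothetical; the factor structure follows the table.
library(lavaan)

model <- '
  talk   =~ talk1 + talk2 + talk3 + talk4 + talk5 + talk6
  care   =~ care1 + care2
  decide =~ decide1 + decide2
  drive  =~ drive1 + drive2
  choose =~ choose1 + choose2
  safe   =~ safe1 + safe2 + safe3
  advise =~ advise1 + advise2
'

fit <- cfa(model,
           data      = dat,
           ordered   = TRUE,      # treat the 5-point items as ordinal (probit link)
           estimator = "WLSMV")   # robust weighted least squares for ordinal data

## Standardized loadings and fit indices (CFI, TLI, RMSEA, SRMR)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```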
Table 1.12 Self-image: CFA factor loadings

Each item response scale is coded as 1 = not at all, 2 = probably not, 3 = possibly, 4 = quite probable, 5 = quite certain. Displayed are interpolated quartiles (Q1, Q2, Q3) and standardized loadings.

Would you describe yourself as a person ...                   Q1    Q2    Q3   Loadings

SELF-IMAGE: OPEN-MINDED*
... who is open-minded toward technical innovations?          3.5   4.3   4.9    0.88
... who likes to be counted among the first to try out
    technical innovations?                                    1.7   2.3   3.5    0.72
... who keeps up with the times?                              3.0   3.7   4.3    0.58

SCIENCE vs. RELIGION IN PERSONAL LIFE
... who is more oriented toward science than religion when
    it comes to personal life issues?                         3.4   4.3   4.9    0.80
... who is religious?                                         1.1   1.8   3.2   −0.64

HUMAN VALUES AND THE TRIED AND TESTED
... who relies on the tried and tested first and foremost?    2.7   3.4   4.2    0.70
... who does not have to go along with every fashion?         3.7   4.3   4.9    0.38
... for whom life is first and foremost about human values,
    not technical achievements?                               4.0   4.7   5.1    0.59
N = 189. Displayed are standardized factor loadings. The CFA treats all scales as 5-point ordinal scales using probit regression. Survey weights are employed to handle unit nonresponse (GOF: Robust Chi² = 39.20, df = 15, p = 0.001; CFI = 0.91, TLI = 0.84; RMSEA = 0.093; SRMR = 0.090). The computation of interpolated quartiles is based on weighted frequency distributions, too. Because the frequency distributions involve minor percentages of "don't know" responses, these "don't know" responses were recoded to the mid category "possibly," acting on the auxiliary assumption that both categories equivalently express maximal uncertainty.