Article
Engineering Cheerful Robots: An
Ethical Consideration
Raya A. Jones
School of Social Sciences, Cardiff University, Cardiff, CF10 3AT, UK; JonesRA9@cardiff.ac.uk;
Tel.: +44-029-2087-5350
Received: 16 May 2018; Accepted: 22 June 2018; Published: 24 June 2018
Abstract: Socially interactive robots in a variety of forms and functions are quickly becoming
part of everyday life and bring with them a host of applied ethical issues. This paper concerns
meta-ethical implications at the interface among robotics, ethics, psychology, and the social sciences.
While guidelines for the ethical design and use of robots are necessary and urgent, meeting this
exigency opens up the issue of whose values and vision of the ideal society inform public policies.
The paper is organized as a sequence of questions: Can robots be agents of cultural transmission?
Is a cultural shift an issue for roboethics? Should roboethics be an instrument of (political) social
engineering? How could biases of the technological imagination be avoided? Does technological
determinism compromise the possibility of moral action? The answers to these questions are not
straightforwardly affirmative or negative, but their contemplation leads to heeding C. Wright Mills’
metaphor of the cheerful robot.
Keywords: social robots; ethics; cultural shift; technological determinism; child–robot interaction
1. Introduction
We inhabit a world in which ‘social’ gadgets cheerfully interact with humans. This paper’s title,
however, alludes also to the metaphorical sense in which sociologist C. Wright Mills spoke of robots in
the 1950s: ‘We know of course that man can be turned into a robot ... But can he be made to want to
become a cheerful and willing robot?’ [1] (p. 171). His metaphor denotes individuals who passively
accept their social position, content with their allotted niche, for they are incapable of questioning
the normative order. ‘The ultimate problem of freedom is the problem of the cheerful robot,’ stated
Mills—for this phenomenon implies that not everyone wishes to be free—and considered the likelihood
that the human mind ‘might be deteriorating in quality and cultural level, and yet not many would
notice it because of the overwhelming accumulation of technological gadgets’ [1] (p. 175).
A characterization of life in the 1950s as an overwhelming accumulation of gadgets may bring
a smile in the 2010s, but the issues raised by Mills remain pertinent, if not more urgent, in this
era of unprecedented acceleration of new technologies that are transforming not only our lifestyles
but also our self-understanding and possibly human nature itself. We are told that a technological
singularity—when as a species we will either transcend our biology (to paraphrase Kurzweil [2]) or
become extinct—is imminent. Across academia, scholars engaged with discourses of posthumanism
and transhumanism comment on how ‘the posthuman view configures human being so that it can
be seamlessly articulated with intelligent machines’ [3] (p. 3). Meanwhile, the new technologies pose
new social, political and ethical challenges. Announcing the birth of roboethics in 2005, Veruggio
provoked his audience to consider whether ethical issues with respect to robots should remain a matter
for stakeholders’ own consciences or be construed as ‘a social problem to be addressed at institutional
level’ [4] (p. 2). In a follow-up paper [5], averring that soon ‘humanity will coexist with the first alien
intelligence we have ever come into contact with—robots’ (p. 5), Veruggio articulated a roadmap for
roboethics with the caveat that its target is ‘not the robot and its artificial ethics, but the human ethics
of the robots’ designers, manufacturers and users’ (p. 7). Since 2005, the march of robots into our midst
has been increasingly recognized as a social problem to be addressed at the institutional level.
This opens up the axiological issue of whose values and vision of the ideal society inform
public policies. The empirical question can be answered by observing the increasing dominance
of technology-led positions (but should this vision determine ethics?). The rise of ‘robot culture’
is a phenomenon of social scientific interest, but should this phenomenon, or some aspects of it,
be construed as deserving an ethical consideration? The answer is not straightforwardly affirmative or
negative, and this paper is not aimed at arriving at a categorical answer. The following is organized as
a sequence of questions that signpost a few salient issues that emerge at the interface among robotics,
ethics, psychology, and the social sciences.
2. Can Robots Be Agents of Cultural Transmission?
The concept of cultural transmission originated in sociobiology, in which context it is distinguished
from genetic transmission of traits. In humans, it denotes socialization and enculturation processes
whereby beliefs, values, and norms of conduct are transmitted across and within generations [6].
At the level of interpersonal interactions, especially within the family, cultural transmission occurs
when adults impart their own values, beliefs, and attitudes to children (‘direct vertical’ transmission).
Cultural transmission occurs also within the peer group (‘direct horizontal’). The process operates
at the societal level without direct interpersonal interaction (‘oblique’ transmission); for instance,
when mass media and popular culture induce imitation and learning.
While cultural transmission is a universal process, the mechanisms and contents involved in
the process are not necessarily universal, since childrearing practices and normative expectations
vary across cultures. Such differences can already be seen in infancy. In a cross-cultural study that
investigated infant behavioral inhibition, Australian, Canadian, Chinese, Italian, and South Korean
toddlers were presented with a toy robot that moved, made noises, and emitted smoke [7]. Toddlers
from Western cultures (especially Italian and Australian) were quicker to touch the robot than their
counterparts from Eastern cultures, with Chinese and South Korean toddlers being the shiest (many of
whom did not touch the robot). Towards an explanation, the researchers speculated that Asian parents
tend to reward cautious and reserved behavior in their children.
The significance of the robot in [7] lay in its being an unfamiliar toy introduced by a stranger,
not necessarily in its appearance as a robot. Real robots increasingly enter environments
of child development in a variety of forms and functions. Examples of direct vertical transmission can
be glimpsed in reports from a longitudinal study in the University of California San Diego, which has
involved placing humanoid robots in a crèche. When QRIO (a bipedal robot created by Sony) was
first introduced, some toddlers cried when it fell [8]. The investigators advised the teachers to tell the
children not to worry since the robot could not be damaged, but the teachers, ignoring the advice,
‘taught the children to be careful; otherwise, children could learn that it is acceptable to push each
other down’ [8] (p. 17956). Later on, children seldom cried when QRIO fell, but instead helped it
to stand up. Separately, an ethnographic study of the same project described how a teacher seized
the opportunity to foster the etiquette of saying ‘Thank you’ when a toddler spontaneously offered
a toy to RUBI (a plump robot, clad in yellow cloth, with a head and arms, created for the project) [9].
There were likely opportunities also for horizontal transmission. Supplementary videos for [8] include
a clip (movie 5) that shows QRIO suddenly falling over and children rushing to it; one boy persistently
tries to raise the robot while other children observe, and may imitate their peer’s helping behavior
(see also an analysis of the episode in [10], pp. 181–182).
In the above examples, the robot served as a fulcrum for human-human interactions within
which cultural transmission took place, but it did not function as a socializing agent in its own
right. Robot Tega exemplifies an effort to build a robot that could ‘socialize’ children into doing their
homework [11,12]. Arguably, an advantage of educational robots is that, as an intelligent tutoring
system, the robot can customize its tutoring to suit individuals’ pace and style of learning (at least
when it works smoothly; see [13] on breakdowns in child–robot interactions). The creators of Tega have
gone a step further in taking into account the fact that emotional states can affect a child’s motivation.
Interacting with an enthusiastic cartoon-like robot can make learning fun, and encourage children
to try harder. Tega was successfully tested with 3–5-year-old English-speaking children learning
Spanish [11].
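The affective-personalization loop described above can be made concrete with a minimal sketch. This is not Tega’s implementation (the system in [11] reportedly learned a personalized affective policy via reinforcement learning over children’s engagement and valence estimates); every name, signal, and threshold below is a hypothetical stand-in for illustration only.

```python
# A hypothetical, minimal sketch of affective personalization in a robot
# tutor. Not Tega's actual code: class name, thresholds, and update rule
# are invented for illustration.
import random

class ToyAffectiveTutor:
    """Adjusts task difficulty and encouragement from an engagement signal."""

    def __init__(self):
        self.difficulty = 1       # 1 = easiest vocabulary item
        self.encouragement = 0.5  # probability of an enthusiastic prompt

    def update(self, engagement):
        # engagement in [0, 1], e.g., estimated from facial expressions
        if engagement > 0.7:      # child is coping well: stretch them
            self.difficulty += 1
        elif engagement < 0.3:    # child is flagging: ease off, cheer more
            self.difficulty = max(1, self.difficulty - 1)
            self.encouragement = min(1.0, self.encouragement + 0.2)

    def next_prompt(self, words):
        word = words[min(self.difficulty, len(words)) - 1]
        cheer = "Great job! " if random.random() < self.encouragement else ""
        return cheer + f"Can you say '{word}' in Spanish?"

tutor = ToyAffectiveTutor()
vocabulary = ["cat", "house", "butterfly", "yesterday"]
for engagement in (0.9, 0.8, 0.2, 0.5):   # simulated per-turn readings
    tutor.update(engagement)
    print(tutor.next_prompt(vocabulary))
```

Even in this toy form, the loop shows why such systems can ‘make learning fun’: the robot’s behavior is contingent on the child’s affective state, not on a fixed script.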
If something helps to improve learning, it makes pedagogic common sense to use it, but curricular
learning (such as mastering a foreign language) should not be confused with socialization. Children’s
long-term exposure to robots could have unintended consequences. This concern is insinuated in the
heading of the New Scientist report on Tega (a new platform designed by Personal Robots Group at
MIT Media Lab), ‘Kids can pick up attitude from robots they play and learn with’ [12]. The thread is
followed in an MIT Technology Review article [14] raising concerns about what might happen when
robots become role models for children. In a convergent vein, a blog article [15] claims that ‘parents
are worried that Amazon Echo is conditioning their kids to be rude’. At present, only a minority of
children experience interactions with robots such as Tega, but ‘smart’ gadgets are increasingly part of
the home environment. Unlike educationally assistive robots, gadgets such as Amazon’s Echo do not
require the child to learn new skills. The gadget is ‘child-friendly’ only because of the impoverishment
of the interaction. The functional reduction of human dialogue does away with courtesies such as
saying ‘please’, and rewards a brusque interaction style—an outcome that could frustrate parents trying
to instill good manners in their children [15].
Currently, any evidence for that effect is at best anecdotal. Nevertheless, this speculative instance
evinces a theoretical distinction between cultural transmission of behavioral norms (e.g., parents
teaching their children not to be rude) and a change at societal level, such as a cultural shift in what
people consider as rudeness. For better or worse, new affordances are created as gadgets are becoming
both more sophisticated and affordable. In contrast with the worries expressed in [15], a leading
headteacher in Britain has recently suggested that Alexa or Siri-type virtual assistants could help
timid children become more confident in lessons: ‘Children can be reluctant to put their hands up
and answer questions in class, especially if they think they might be ridiculed. That won’t come from
a machine.’ [16]. It could be argued that helping timid children overcome their shyness in the classroom
could give them a better foundation for life than providing them with technological crutches.
The specific ethical issue arising at this juncture pertains to ameliorative responsibility; that is,
‘an obligation to improve a situation, no matter whether one is causally responsible for
it’ [17] (p. 110). People may agree about this obligation in principle, but opinions are polarized
as to whether using robots will improve or worsen given situations. In general, the answer to whether
robots can be agents of cultural transmission is affirmative, but we cannot assume that any direct
transmission by means of robots would have the intended effect (or only that specific effect) on
developmental and learning outcomes. Furthermore, as can be observed in the case of migrant families,
the transmission of values from parents to offspring might be less effective in the host country insofar
as children might be reluctant to accept the parents’ tradition whilst parents may hesitate to impose
attitudes that might be nonadaptive in the new environment [18]. A similar ‘generation gap’ might
exist between adults and children or youth, as digital migrants and digital natives respectively (cf. [19]),
with the qualification that (unlike migrants to an existing society) the digital world is rapidly evolving
ahead of all of us, old and young.
3. Is a Cultural Shift an Issue for Roboethics?
Describing cultural shifts in highly industrialized societies in the 1980s, Inglehart proposed that
a change in values is mostly an automatic consequence of increased prosperity [20]. He urged attention
to ‘substantial and enduring cross-cultural differences in certain basic attitudes and habits,’ differences
that are stable but not immutable, and are susceptible to gradual changes that are traceable to specific
causes [20] (p. 22). He further commented that changes due to industrialization may interact differently
with religion, as a political factor, in the Confucian-influenced Far East, the Islamic world, and Catholic
countries. Similar assertions could be extended to the technologized societies of the 2010s.
The existence of cross-cultural differences in attitudes to robots is well documented. For example,
a 2012 Eurobarometer survey [21] in 24 European countries revealed considerable cross-national
differences, notably in public objections to using robots in the care of children, the elderly and the
disabled; negative attitudes were strongest in Cyprus (85%) and weakest in Portugal (35%). A 2016
survey of attitudes to robots in healthcare [22] in 12 countries across Europe, the Middle East and
Africa found that the British sample on the whole was least receptive to the idea of healthcare robots.
However, 55% of 18- to 24-year-old Britons were receptive to the idea, in contrast with only 33% of
older Britons. As technological realities change, attitudes to robots change across generations. Whereas
in the early 1980s an Arab journalist reportedly described the creation of androids as a travesty against
Allah (cited in [23]), in October 2017, Saudi Arabia granted citizenship to a female-looking robot [24].
This gesture might well be a publicity stunt, but nonetheless it indicates the possibility of shifts in
acceptance of humanlike artefacts among Muslims.
At the level of the individual person, cultural shifts translate into developmental outcomes
through an interplay of proximal and distal processes. Bronfenbrenner’s bioecological paradigm [25]
and his earlier ecological systems model [26] describe human development as happening within
hierarchically nested systems. Proximal processes are the ‘progressively more complex reciprocal
interaction’ of a child with the people, objects, and symbolic resources that constitute the child’s
immediate environment (microsystem)—an interaction that ‘must occur on a fairly regular basis over
extended periods of time’ in order to be effective [25] (p. 620). By implication, robots can play a role
in proximal processes only when they enter the child’s world on a regular basis [27]. Furthermore,
children’s everyday contact with robots is likely to occur within family and school settings already
replete with hi-tech, settings that reflect adults’ beliefs about the technologies they make available
to the child. Adults’ beliefs are formed against the backdrop of the particular society’s characteristic
belief systems, resources, hazards, life styles, life-course options, patterns of social interchange, and
so forth (the macrosystem). Bronfenbrenner’s model thus posits distal processes that impact, top-down,
proximal processes. This treats ‘culture’ as if it were operating externally to everyday activities within
microsystems. Recent revisions (e.g., [28]) tend to integrate Bronfenbrenner’s bioecological paradigm
with Vygotskian and neo-Vygotskian approaches, sometimes under the label ‘ecocultural’. Endorsing
Bronfenbrenner’s view of the human being as ‘a growing, dynamic entity that progressively moves
into and restructures the milieu in which it resides’ [26] (p. 21), bioecological and ecocultural models
generally describe processes that shape the person one becomes.
A technology-related cultural shift may manifest in a variety of ways. For instance, by age 4,
most children categorize prototypical living and non-living kinds, and typically designate robots
to the inanimate category; but findings that children tend to attribute aliveness to robot pets
with which they interact may indicate the emergence of a new ontological category that disrupts
current animate/inanimate distinctions [29–32]. Commentators also remark on the desirability
(or otherwise) of inevitable consequences of a technologized social reality. In this vein, Turkle opines
that disembodied interpersonal interactions through social media, mobile phones and the internet
have led to the emergence of a new state of selfhood—human subjects wired into social existence
through technology—at the cost of youth’s capacity for authentic relationships [33]. As a consequence,
society has arrived at a ‘robotic moment’, a situation marked by readiness to accept robots as
relationship partners, according to Turkle.
If a cultural shift is inevitable, ethical appraisals may at best provide pragmatic agendas for
minimizing risks. However, even modest agendas of limited application are imbued with their authors’
notions of the kind of society we want to live in, and are underpinned by the belief that it is possible to
influence the direction of societal change.
4. Should Roboethics Be an Instrument of (Political) Social Engineering?
The term ‘social engineering’ has two meanings. Recently it has entered the field of computer
and information security as an umbrella term for a variety of techniques that are used to manipulate
people into divulging confidential information (e.g., deception by phone) or compromise people’s
security and privacy in cyberspace (e.g., phishing emails) [34–36]. Social engineering in this sense
is clearly relevant here since robots can be hacked for criminal or malicious purposes. The question
raised in this section, however, refers to the older and more general sense of the term. As used chiefly
in political science and sociology, social engineering denotes any planned attempt by governing bodies
to manage social change and in this way to regulate the future of a society.
The first occurrence of the analogy between engineers and policymakers is traceable to
an 1842 book by the British socialist economist John Gray [37]. Gray contrasted a situation in which
a steam engine malfunctions with the situation in which some social or economic problem requires
remedy. If several engineers were separately to examine the malfunctioning steam engine, they likely
would arrive at similar conclusions about the problem and how to fix it; but in the political arena
there is little agreement among separate committees regarding the nature of the problem, its cause and
remedy: ‘the political and social engineers of the present day ... seem to agree in nothing, except that
evils do exist’ [37] (p. 117). A similar observation could be made about the present-day proliferation
of advisory bodies and initiatives that produce guidelines for ethical design and use of artificial
intelligence (AI) and robots.
At the close of the nineteenth century, the metaphor acquired positive connotations of public
service, defining social engineers as specialists appointed to handle problems of human or social
nature. For instance, the American Christian sociologist Edwin Earp introduced his 1911 book (titled
The Social Engineer) with the claim, ‘Social engineering means not merely charities and philanthropies
that care for victims of vice and poverty, but also intelligent organized effort to eliminate the causes that
make these philanthropies necessary’ [38] (p. xv). He further defined social engineering as ‘the art
of making social machinery move with the least friction and with the best result in work done’ [38]
(p. 33). Throughout the twentieth century, the usage of the term became associated with centralized
organizations that deploy preventative and ameliorative measures towards fixing society’s ills.
Extrapolating the above usage to the field of roboethics, the would-be social engineers are experts
in a variety of fields who may be called upon to identify risks and plan ways to minimize these.
Individuals may contribute through membership in organizational sections; e.g., the Institute of
Electrical and Electronics Engineers’ (IEEE) Global Initiative on Ethics of Autonomous and Intelligent
Systems. They may participate in workshops that could inform policymaking. For example,
the principles of ethical design and use of robots outlined in [39] originated in a 2010 workshop,
and subsequently were incorporated into the British Standards Institution’s ‘Guide to the Ethical
Design and Application of Robots and Robotic Systems’ published in 2016 [40]. The spirit of the
social engineer is implicit in the mission statement of the Foundation for Responsible Robotics,
a Netherlands-based initiative with an international cast of academics. The Foundation’s mission,
as its website states, is ‘to shape a future of responsible robotics design, development, use, regulation,
and implementation’ [41].
A modicum of utopianism is perhaps inevitable in any ambition to better the future of society.
In accordance with Karl Popper’s [42] distinction between utopian and ‘piecemeal’ social engineering,
however, initiatives such as the aforementioned may fall under the rubric of piecemeal. In Volume I
of his political science book, first published in 1945, Popper regarded the piecemeal approach as
preferable to the utopian, for this approach tackles problems as they arise, seeking ‘a reasonable
method of improving the lot of man,’ a method that can be readily applied and ‘has so far been really
successful, at any time, and in any place’ [42] (p. 148). However, his recommendation to rely on
tried-and-tested methods might be difficult to implement in a world that is itself rapidly changing
due to technological advances. This challenge is insinuated in a rider to the mission statement of the
Foundation for Responsible Robotics: ‘We see both the definition of responsible robotics and the means
of achieving it as ongoing tasks that will evolve alongside the technology’ [41]. Viewed pessimistically,
the possibility of pre-empting irresponsible robotics might become moot if technological innovations
constantly change the terrain at a pace and in ways that are difficult to anticipate.
A case in point is cybersecurity. Technological innovations create new affordances for social
engineering in the term’s negative meaning; ‘the social engineer is a skilled human manipulator who
preys on human vulnerabilities’ [36] (p. 115). This characterization could not be more diametrically
opposed to Earp’s, in whose view the ‘social engineer is one who can help the religious leader to
establish a desired working force in any field of need’ [38] (p. xviii). As a response to a specific ‘field of
need’, roboethics undertakes tasks of piecemeal social engineering by virtue of advising public policies.
An affirmative answer to the question of whether roboethics should contribute to the engineering of
a better society, however, presupposes a consensus about what constitutes a better society. The absence
of consensus raises the question of whose vision of the ideal society is being served.
5. How Could Biases of the Technological Imagination Be Avoided?
Social issues have been recognized as among the ‘problems’ defining the engineering field for
more than a decade. While social scientists typically investigate the impact of technologies on society
and persons, roboticists tend to ask what needs to be done to make robots desirable for society
and persons. The term ‘technological imagination’ paraphrases Mills’ definition of the sociological
imagination. The sociological imagination is a stance that construes social phenomena in terms of what
these may reveal about the workings of a society [1], whereas the technological imagination is a stance
predisposed towards construing social issues in terms of their implications for technology [10,43].
This is the engineering field’s default stance, understandably, since making robots is its raison d’être.
Furthermore, since it is in the manufacturers’ interest to avoid marketing products that might make
them liable to lawsuits, the industry may self-regulate in the long run. Pragmatically, ethical appraisals
pivot on assessments of risks associated with technological innovations, and policy recommendations
center on how these risks could be realistically minimized.
The focus on the technological artefact, although necessary, results in a kind of tunnel vision.
For example, in an interview with the IEEE online newsletter [44], the vice president of the IEEE Society
on Social Implications of Technology has identified important ethical and legal concerns related to
marketing home robots to families with young children, including information security, safety, and
safeguarding children: the gadget could be hacked, enabling strangers to watch the child; it might be
used unscrupulously to sell products to children; a robot might accidentally hurt a child; and the robot
might witness child abuse. Nevertheless, technology-driven ethical appraisals are not child-centered,
and seldom take into consideration the possibility of detrimental effects on child development or the
wider social context (e.g., the home or the school). In contrast, psychology-driven ethical appraisals
such as outlined by Amanda and Noel Sharkey [45,46] do highlight issues of emotional attachment,
] do highlight issues of emotional attachment,
deception of the child, and loss of human contact (see also [33]). Apropos teachers’ attitudes to robots
in the classroom, research reported in [47] demonstrates the exigency of taking the consideration of
ethics beyond design issues and toward engagement with stakeholders’ views on how robots may
affect their current practices. The point made here, however, concerns biases located in one side of
a schism within the discourse of social robotics [10,43,48]. Representing the stance identified by the
present author as the technological imagination [10,43], the writers of [48] maintain that the world
is ‘run by technological developments, and that robots are here for further enhancements and new
applications’ and are critical of the opposite stance, the ‘society-driven side [which] opines that the
world is driven and run by social aspects’ (p. 107).
The technological imagination informs policies not only via a pragmatic ‘damage limitation’
approach to regulating uses of technological products, but also via a narrative of moral commitment
to improving the quality of life by means of robots. In this vein, Movellan has stressed ‘our
responsibility to explore technologies that have a good chance to change the world in a positive
manner’ [49] (p. 239). The claim that robots will help children to become ‘better people: stronger,
smarter, happier, more sociable and more affective,’ as he put it in an interview with Wired [50],
insinuates that children who are denied robots—either because parents cannot afford the gadgets or
conscientiously refrain from giving them to their children—will grow up worse people: weaker, duller,
sadder, less sociable and less affective. The rhetoric thus places the onus on policymakers to allocate
resources to the development and promotion of educational robots.
The benefits of socially assistive robots (SAR) should not be overlooked or understated.
For example, there is robust evidence in support of robot interventions for promoting social skills
among children with autism [51,52]. A potential pitfall of technology-led morality, in this specific
instance, would be a naïve belief that providing non-autistic children with robot companions can
enhance their social skills, a belief resting on a simplistic ‘engineering logic’:
• The social skills of autistic child A are impaired.
• Intervention using robot R raises A’s skills to age-average level.
• The social skills of non-autistic child B are already at age-average level.
• Therefore, R will raise B’s skills to above-average.
However, autistic children might respond better to robots than to people because of their
symptomatic impairments (as noted in [51]). Non-autistic infants are innately attuned to human beings,
and children ultimately prefer people to robots (see [27] for a related discussion). The engineering logic
can be contrasted with a ‘psychological logic’; namely, an approach that seeks to explain phenomena
of human mind and behavior by reference to biopsychosocial factors impacting on the individual:
• The social skills of autistic child A are impaired. Explanation: Deficits in the mirror neuron system (which facilitates imitation and empathy).
• Intervention using robot R raises A’s skills to age-average level. Explanation: Robots are less complex than people are.
• The social skills of non-autistic child B are already at age-average level. Explanation: Innate orientation to people and a personal history of social interactions.
• Therefore, since robots are less complex than people, R might be detrimental to B’s further development.
Indeed, some psychologists investigating human-robot interaction (HRI) have expressed concerns
that children might accept robotic companionship without fostering the moral responsibilities that
human companionships entail [53]. Findings that children who had higher involvements with
technological artefacts were less likely to view living dogs as having a right to just treatment and to be
free of harm may signal the possibility that human adaptation to interacting with robots will ‘dilute
the “I–thou” relationship of humans to other living beings’ [54] (p. 231).
The ‘quick’ answer to the question of how to avoid biases of the technological imagination is
to widen the pool of expertise so as to encompass a spectrum of dispositions to robots as well as
knowledge. This is already done in at least some cases (advisory bodies tend to be multidisciplinary;
robots for autism are developed in collaboration with clinicians). The potential dilution of the ‘I–thou’
relationship, however, signals a deeper, longer-term problem.
6. Does Technological Determinism Compromise the Possibility of Moral Action?
Identifying technological determinism as the dominant narrative in social robotics, Šabanović
commented that, in this narrative, social problems are typically construed as something in need of
technological ‘fixes’, and the users of robotic products are often treated ‘as objects of study, rather
than active subjects and participants in the construction of the future uses of robots’ [55] (p. 440).
This is not a peculiarity of robotics, for it reproduces the dominant mechanistic worldview of modern
psychology [10]. The mechanistic worldview has made it possible to translate human qualities onto
machines. As Rodney Brooks put it, ‘Humans, after all, are machines made up of organic molecules
whose interactions can all be aped (we think) by sufficiently powerful computers’ [56] (p. 86).
There lingers the technological dream of ‘the Universal Automaton ... the creation of the perfect
citizen,’ which could be augmented with an emphasis on ‘the amount of diversity it is capable of
handling’ as a benchmark in the creation of truly social robots [57] (p. 86). The infamous case of
Tay evinces some pitfalls of machine learning. Tay was a chatbot developed by Microsoft, targeting
18–24-year-olds in the USA [58]. It was launched via Twitter on 23 March 2016, but Microsoft removed
it only 16 hours later because Tay had started to post inflammatory and offensive tweets, having quickly
picked up antisemitism from social media. Microsoft attributed this to ‘trolls’ who attacked Tay,
since the bot customized its replies to them by searching the internet for suitable source material [59].
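The failure mode is easy to reproduce in miniature. The sketch below is a purely hypothetical stand-in (Tay’s actual architecture has not been published in detail); it shows how a bot that learns reply material from unvetted user input hands control of its output to whoever floods it with messages.

```python
# A deliberately naive sketch of data poisoning in a learning chatbot.
# Hypothetical throughout; illustrates the mechanism, not Microsoft's system.
import random
from collections import deque

class NaiveEchoBot:
    """Learns reply material from raw user input with no content filter."""

    def __init__(self, memory_size=100):
        self.corpus = deque(maxlen=memory_size)  # recent phrases, unvetted

    def observe(self, message):
        self.corpus.append(message)              # poisoning happens here

    def reply(self):
        return random.choice(self.corpus) if self.corpus else "Hello!"

bot = NaiveEchoBot()
for message in ["hi there!", "lovely day", "hateful slogan"] * 10:
    bot.observe(message)                         # trolls can flood the corpus
print(bot.reply())  # one-in-three chance of parroting the flooded slogan
```

The bot ‘handles’ whatever diversity of input it receives, yet has no position from which to evaluate it; this is the asymmetry the following paragraphs draw out.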
From the standpoint of applied ethics, issues that immediately come to mind apropos this instance
of technology-gone-awry include the exigency of regulating AIs by means of censorship, perhaps
through installing a moral code in the machine. From the standpoint of metaethics, the case of Tay
calls into question the nature of morality itself. In the present context, the moral of the story lies in
the demonstration of an AI’s capability of handling diversity of information compounded with the
incapability of locating its own self in a space of moral actions. Like Mills’ cheerful robots—and unlike
those trolls, whose mischief was deliberate—Tay lacked freedom of thought to reason about what it
was finding on the internet.
The mechanistic worldview enables a functional reduction of the complexity of social interaction
to algorithms enacted by a machine; in effect, as [57] put it, minimizing the ‘human’ in HRI. However,
this squeezes out of the minimal ‘human’ the very quality that makes us human—the aspect of selfhood
that Charles Taylor regarded as ‘perennial in human life’; namely, the fact that ‘a human being exists
inescapably in a space of ethical questions; she cannot avoid assessing herself in relation to some
standards’ [60] (p. 58). It is not the possession of some standards, a moral code, but the capacity
(and freedom) to dialogue with these standards, that constitutes the human subject as ‘an articulate
identity defined by its position in the space of dialogical action’ [60] (p. 64). The existence of roboethics
indeed attests to dialogical action at both individual and collective levels.
7. Conclusions
Above the silver lining of technological progress, there is a cloud of worries about privacy, human
safety, using robots for crime, and more. Veruggio and cowriters provide a comprehensive list of
global social and ethical problems that the introduction of intelligent machines into everyday life
brings about: dual-use technology (having civilian and military applications); anthropomorphizing
lifelike machines; cognitive and affective bonds toward machines; technology addiction; digital divide
across age groups, social class, and/or world regions; fair access to technological resources; effects of
technology on the global distribution of wealth and power; and the impact on the environment [61]
(p. 2143). The discourse is by default oriented towards matters of applied ethics that arise from existing
technology, as well as matters arising in anticipation of futuristic robots (such as robot rights and robot
personhood). The possibility that human–robot coexistence might result in the engineering of human
subjects who, in Mills’ words, will ‘want to become a cheerful and willing robot’ [1] (p. 171) is not
usually flagged as an issue for roboethics. The focus remains on what technology can do for us and
shouldn’t do to us; and yet this technology might be changing us, our human nature.
Funding: This research received no external funding.
Conflicts of Interest: The author declares no conflict of interest.
References
1. Mills, C.W. The Sociological Imagination; Oxford University Press: New York, NY, USA, 1959.
2. Kurzweil, R. The Singularity is Near; Penguin Books: London, UK, 2005.
3. Hayles, K. How We Became Posthuman; University of Chicago Press: Chicago, IL, USA, 2008.
4. Veruggio, G. The birth of roboethics. In Proceedings of the IEEE International Conference on Robotics and Automation Workshop on Robo-Ethics, Barcelona, Spain, 18 April 2005; pp. 1–4.
5. Veruggio, G. The EURON roboethics roadmap. In Proceedings of the IEEE-RAS International Conference on Humanoid Robots, Genova, Italy, 4–6 December 2006; pp. 612–617.
6. Bisin, A.; Verdier, T. Cultural transmission. In The New Palgrave Dictionary of Economics Online, 2nd ed.; Durlauf, S.N., Blume, L.E., Eds.; Palgrave Macmillan: London, UK, 2008; Volume 2, pp. 177–181.
7. Rubin, K.H.; Hemphill, S.A.; Chen, X.; Hastings, P.; Sanson, A.; Lo Coco, A.; Zappulla, C.; Chung, O.-B.; Park, S.-Y.; Doh, H.S.; et al. A cross-cultural study of behavioral inhibition in toddlers: East–West–North–South. Int. J. Behav. Dev. 2006, 30, 219–226.
8. Tanaka, F.; Cicourel, A.; Movellan, J.R. Socialization between toddlers and robots at an early childhood education center. Proc. Natl. Acad. Sci. USA 2007, 104, 17954–17958.
9. Alač, M.; Movellan, J.; Tanaka, F. When a robot is social: Spatial arrangements and multimodal semiotic engagement in the practice of social robotics. Soc. Stud. Sci. 2011, 41, 893–926.
10. Jones, R.A. Personhood and Social Robotics; Routledge: London, UK, 2015.
11. Gordon, G.; Spaulding, S.; Kory Westlund, J.; Lee, J.; Plummer, L.; Martinez, M.; Das, M.; Breazeal, C. Affective personalization of a social robot tutor for children’s second language skills. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; pp. 3951–3957.
12. Revell, T. Kids can pick up attitude from robots they play and learn with. Available online: https://www.newscientist.com/article/2121801-kids-can-pick-up-attitude-from-robots-they-play-and-learn-with/ (accessed on 11 May 2018).
13. Serholt, S. Breakdowns in children’s interactions with a robotic tutor: A longitudinal study. Comput. Hum. Behav. 2018, 81, 250–264.
14. Condliffe, J. What happens when robots become role models. Available online: https://www.technologyreview.com/s/603708/what-happens-when-robots-become-role-models/ (accessed on 11 May 2018).
15. Truong, A. Parents are worried the Amazon Echo is conditioning their kids to be rude. Available online: https://qz.com/701521/parents-are-worried-the-amazon-echo-is-conditioning-their-kids-to-be-rude/ (accessed on 11 May 2018).
16. Davis, A. Alexa-style robots could ‘help shy children put up their hands’, says head. Available online: https://www.standard.co.uk/news/education/alexa-robots-could-help-shy-children-put-up-their-hands-says-head-a3829346.html (accessed on 14 May 2018).
17. Seibt, J. ‘Integrative Social Robotics’—A new method paradigm to solve the description problem and the regulation problem? In What Social Robots Can and Should Do; Seibt, J., Nørskov, M., Schack Andersen, S., Eds.; IOS Press: Amsterdam, The Netherlands, 2016.
18. Schönpflug, U. Intergenerational transmission of values: The role of transmission belts. J. Cross-Cult. Psychol. 2001, 32, 174–185.
19. Prensky, M. Digital natives, digital immigrants part 1. Available online: https://www.marcprensky.com/writing/Prensky%20-%20Digital%20Natives,%20Digital%20Immigrants%20-%20Part1.pdf (accessed on 20 June 2018).
20. Inglehart, R. Culture Shift in Advanced Industrial Society; Princeton University Press: Princeton, NJ, USA, 1990.
21. Special Eurobarometer 382: Public attitudes towards robots, 2012. Available online: http://ec.europa.eu/public_opinion/archives/ebs/ebs_382_en.pdf (accessed on 3 March 2017).
22. Survey across EMEA, Britons most skeptical of robots, AI for healthcare. Available online: https://www.emarketer.com/Article/Survey-Across-EMEA-Britons-Most-Skeptical-of-Robots-AI-Healthcare/1015681 (accessed on 19 April 2017).
23. MacDorman, K.F.; Vasudevan, S.K.; Ho, C.-C. Does Japan really have robot mania? Comparing attitudes by implicit and explicit measures. AI Soc. 2009, 23, 485–510.
24. Griffin, A. Saudi Arabia grants citizenship to a robot for the first time. Available online: https://www.independent.co.uk/life-style/gadgets-and-tech/news/saudi-arabia-robot-sophia-citizenship-android-riyadh-citizen-passport-future-a8021601.html (accessed on 13 May 2018).
25. Bronfenbrenner, U. Developmental ecology through space and time: A future perspective. In Examining Lives in Context: Perspectives on the Ecology of Human Development; Moen, P., Elder, G.H., Jr., Luscher, K., Eds.; American Psychological Association: Washington, DC, USA, 1995.
26. Bronfenbrenner, U. The Ecology of Human Development; Harvard University Press: Cambridge, MA, USA, 1979.
27. Jones, R.A. ‘If it’s not broken, don’t fix it?’ An inquiry concerning the understanding of child–robot interaction. In What Social Robots Can and Should Do; Seibt, J., Nørskov, M., Schack Andersen, S., Eds.; IOS Press: Amsterdam, The Netherlands, 2016.
28. Vélez-Agosto, N.M.; Soto-Crespo, J.G.; Vizcarrondo-Oppenheimer, M.; Vega-Molina, S.; García Coll, C. Bronfenbrenner’s bioecological theory revision: Moving culture from the macro into the micro. Perspect. Psychol. Sci. 2017, 12, 900–910.
29. Kahn, P.H.; Friedman, B.; Pérez-Granados, D.R.; Freier, N.G. Robotic pets in the lives of preschool children. Interact. Stud. 2006, 7, 405–436.
30. Kahn, P.H.; Gary, H.E.; Shen, S. Children’s social relationships with current and near future robots. Child Dev. Perspect. 2013, 7, 32–37.
31. Kahn, P.H.; Reichert, A.L.; Gary, H.E.; Kanda, T.; Ishiguro, H.; Shen, S.; Ruckert, J.H.; Gill, B. The new ontological category hypothesis in human–robot interaction. In Proceedings of the 6th ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA, 6–9 March 2011; pp. 159–160.
32. Severson, R.L.; Carlson, S.M. Behaving as or behaving as if? Children’s conceptions of personified robots and the emergence of a new ontological category. Neural Netw. 2010, 23, 1099–1103.
33. Turkle, S. Alone Together; Basic Books: New York, NY, USA, 2011.
34. Hatfield, J.M. Social engineering in cybersecurity: The evolution of a concept. Comput. Secur. 2018, 73, 102–113.
35. Luo, X.; Brody, R.; Seazzu, A.; Burd, S. Social engineering: The neglected human factor for information security management. Inf. Resour. Manag. J. 2011, 24, 1–8.
36. Mouton, F.; Malan, M.M.; Kimppa, K.K.; Venter, H.S. Necessity for ethics in social engineering research. Comput. Secur. 2015, 55, 114–127.
37. Gray, J. An Efficient Remedy for the Distress of Nations; Adam and Charles Black: Edinburgh, UK, 1842.
38. Earp, E.L. The Social Engineer; Eaton & Mains: New York, NY, USA, 1911.
39. Boden, M.; Bryson, J.; Caldwell, D.; Dautenhahn, K.; Edwards, L.; Kember, S.; Newman, P.; Parry, V.; Pegman, J.; Rodden, T.; et al. Principles of robotics: Regulating robots in the real world. Connect. Sci. 2017, 29, 124–129.
40. British Standards Institution. Robots and Robotic Devices: Guide to the Ethical Design and Application of Robots and Robotic Systems; British Standards Institution: London, UK, 2016.
41. Foundation for Responsible Robotics. Available online: https://responsiblerobotics.org/about-us/mission/ (accessed on 7 May 2018).
42. Popper, K.R. The Open Society and Its Enemies; Princeton University Press: Princeton, NJ, USA, 2013.
43. Jones, R.A. Concerning the apperception of robot-assisted childcare. Philos. Technol. 2018.
44. Chant, R. Robot nannies: Should gadgets raise your kids? Available online: http://theinstitute.ieee.org/ieee-roundup/blogs/blog/robot-nannies-should-gadgets-raise-your-kids (accessed on 8 May 2018).
45. Sharkey, A. Should we welcome robot teachers? Ethics Inf. Technol. 2016, 18, 283–297.
46. Sharkey, N.; Sharkey, A. The crying shame of robot nannies: An ethical appraisal. Interact. Stud. 2010, 11, 161–190.
47. Serholt, S.; Barendregt, W.; Vasalou, A.; Alves-Oliveira, P.; Jones, A.; Petisca, S.; Paiva, A. The case of classroom robots: Teachers’ deliberations on the ethical tensions. AI Soc. 2017, 32, 613–631.
48. Van den Herik, H.J.; Lamers, M.; Verbeek, F. Understanding the artificial. Int. J. Soc. Robot. 2011, 3, 107–109.
49. Movellan, J.R. Warning: The author of this document may have no mental states. Read at your own risk. Interact. Stud. 2010, 11, 238–245.
50. Carmody, T. Let your children play with robots. Available online: https://www.wired.com/2010/10/children-robots (accessed on 9 May 2018).
51. Scassellati, B.; Admoni, H.; Matarić, M. Robots for use in autism research. Annu. Rev. Biomed. Eng. 2012, 14, 275–294.
52. Kientz, J.A.; Goodwin, M.S.; Hayes, G.R.; Abowd, G.D. Interactive Technologies for Autism; Morgan & Claypool: San Rafael, CA, USA, 2014.
53. Kahn, P.H.; Friedman, B.; Hagman, J. ‘I Care about him as a Pal’: Conceptions of robotic pets in online AIBO discussion forums. In Proceedings of the CHI Extended Abstracts on Human Factors in Computing Systems, Minneapolis, MN, USA, 20–25 April 2002; pp. 632–633.
54. Melson, G.F. Child development robots: Social forces, children’s perspectives. Interact. Stud. 2010, 11, 227–232.
55. Šabanović, S. Robots in society, society in robots: Mutual shaping of society and technology as a framework for social robot design. Int. J. Soc. Robot. 2010, 2, 439–450.
56. Brooks, R.A. Will robots rise up and demand their rights? Available online: http://content.time.com/time/magazine/article/0,9171,997274,00.html (accessed on 11 May 2018).
57. Kaerlein, T. Minimizing the human? Functional reductions of complexity in social robotics and their cybernetic heritage. In Social Robots from a Human Perspective; Vincent, J., Taipale, S., Sapio, B., Lugano, G., Fortunati, L., Eds.; Springer: Cham, Switzerland, 2016.
58. Tay. Available online: https://web.archive.org/web/20160414074049/https://www.tay.ai/ (accessed on 8 May 2018).
59. Mason, P. The racist hijacking of Microsoft’s chatbot shows how the internet teems with hate. Available online: https://www.theguardian.com/world/2016/mar/29/microsoft-tay-tweets-antisemitic-racism (accessed on 10 May 2018).
60. Taylor, C. The dialogical self. In Rethinking Knowledge; Goodman, R.F., Fisher, W.R., Eds.; State University of New York Press: Albany, NY, USA, 1995.
61. Veruggio, G.; Operto, F.; Bekey, G. Roboethics: Social and ethical implications. In Springer Handbook of Robotics; Siciliano, B., Khatib, O., Eds.; Springer International Publishing: Cham, Switzerland, 2016.
© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).