This manuscript was accepted by the International Journal of Social Robotics on September 11, 2019.
Please cite published article available at: http://dx.doi.org/10.1007/s12369-019-00592-1
Designing virtuous sex robots
Anco Peeters · Pim Haselager

Anco Peeters, Faculty of Law, Humanities and the Arts (Building 19), University of Wollongong NSW 2522, Australia. E-mail: mail@ancopeeters.com

Pim Haselager, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands.
Abstract We propose that virtue ethics can be used
to address ethical issues central to discussions about
sex robots. In particular, we argue virtue ethics is well
equipped to focus on the implications of sex robots for
human moral character. Our evaluation develops in four
steps. First, we present virtue ethics as a suitable frame-
work for the evaluation of human–robot relationships.
Second, we show the advantages of our virtue ethical
account of sex robots by comparing it to current in-
strumentalist approaches, showing how the former bet-
ter captures the reciprocal interaction between robots
and their users. Third, we examine how a virtue ethical
analysis of intimate human–robot relationships could
inspire the design of robots that support the cultiva-
tion of virtues. We suggest that a sex robot which is
equipped with a consent-module could support the cul-
tivation of compassion when used in supervised, ther-
apeutic scenarios. Fourth, we discuss the ethical impli-
cations of our analysis for user autonomy and respon-
sibility.
Keywords sex robots · virtue ethics · human–robot interaction · empathy
Introduction
Some may find it hard to come to grips with sex robots.
Yet recent events, like the 2015 Campaign Against Sex
Robots in the UK, the 2017 publication of John Danaher
and Neil McArthur’s volume on the ethical and societal
implications of robot sex [18], and the fourth incarna-
tion of the International Conference on Love and Sex
with Robots, show that this topic has captured the public's attention and provokes serious academic debate. A re-
cent report by the Foundation for Responsible Robotics
[42] calls for a broad and informed societal discussion
on intimate robotics, because manufacturers are taking
initial steps towards building sex robots. We take up
this call by applying virtue ethics to analyse intimate
human–robot relationships.
Why should we look at such relationships through
the lens of virtue ethics? Virtue ethics is one of the
three main ethical theories on offer and distinguishes
itself by putting human moral character centre stage
– as opposed to the intentions or consequences of ac-
tions. Virtue ethics has been discussed in relation to
artificial intelligence more generally [54,49]. However,
virtue ethics has received relatively little attention in
discussions regarding sex with robots, even though sex
robots could have a significant impact on their users'
moral character. Two main exceptions are Litska Strik-
werda [48], who assesses arguments against the use of
child sex robots, and Robert Sparrow [47], who sug-
gests that rape representation by robots could encour-
age the cultivation of vices. Our aims are different, as
we will not focus on either child sex robots or robots
that play into rape fantasies. Instead, we propose how
virtue ethics can be used to contribute to the potential
positive aspects of intimate human–robot interactions
through the cultivation of virtues, and provide sugges-
tions for the design process of such robots.
We develop our thesis in four steps. First, we present
virtue ethics in relation to other ethical theories and
argue that, because of its focus on the situatedness
of human moral character, virtue ethics is in a bet-
ter position to assess aspects of intimate human-robot
interaction (see also [51, p. 209]). Second, we show how
our virtue ethical account fares better than current in-
strumentalist approaches to sex robots, such as those
inspired by the seminal and pioneering work of David
Levy [32,33]. Such instrumentalist approaches focus too
much on the usability aspects of the interaction and, un-
justly, frame sex robots as neutral tools. Understanding
the interaction with a sex robot as mere consumption
insufficiently acknowledges the risk of these robots' influence
on how humans think about and act on love and sex.
Third, we propose a way to reduce the risks identified
by considering how the cultivation of compassion as a
virtue may help in practising consent-scenarios in ther-
apeutic settings. This way, we aim to show how, under
certain conditions, love and sex with robots might ac-
tually help to enhance human behaviour. Fourth, we
examine the implications our virtue ethical analysis of intimate human–robot relations may have for our un-
derstanding of autonomy and responsibility.
1 Virtue ethics and social robotics
Current ethical debates on human–robot interaction are
generally not framed in terms of virtues, but in terms
of action outcomes or rules to be followed. It strikes us
as regrettable that up until now, virtue ethics has re-
ceived relatively little attention in the literature on so-
cial robotics in general, and on intimate human-robot
relations in particular (but see [3,27]). A virtue-ethical
analysis can help evaluate how, on the one hand, hu-
man agents could make use of love and sex robots in
ways that may be judged to be (un)problematic. On
the other hand, virtue ethics may help to clarify how
human behaviour and societal views are influenced by
the use of such robots and thereby help us to learn more
about what it is to be a virtuous person in an intimate
relationship. To establish the potential of virtue ethics
for the evaluation of intimate human–robot relation-
ships, we will examine aspects of virtue ethics relevant
to the current discussion and consider what it has to
add compared to other ethical approaches.
Virtue ethics starts from the idea that the cultiva-
tion of human character is fundamental to questions of
morality. In the Western philosophical tradition, Aris-
totle’s theory of virtue ethics is the most influential and
he defines virtue as an excellent trait of character. [1] Such
traits, like honesty, courage and compassion, are stable
dispositions to reliably act in the right way according to
the situation one is in. Aristotle describes a virtue as, in
general, the right mean between two extremes (vices).
He states that courage, for example, can be described
as the mean between recklessness and cowardice (Nico-
machean Ethics, II.1104a7). Finding the right middle
between extremes is a challenging task and approaching
that middle often requires extensive practice. In addi-
tion to practice, acquiring a virtue is helped by instruc-
tion from an exemplary teacher. A virtuous person will
have cultivated her character to be disposed to natu-
rally act in the right way in the relevant situation. It
should be noted that although virtues are not about
singular acts, acting honestly, courageously or compas-
sionately may help a person to become honest, coura-
geous or compassionate. This potential interactive loop,
of internalising behaviour by practice and feedback, mo-
tivates our interest in applying virtue ethics to intimate
human-robot interaction.

[1] Other influential virtue ethical traditions originated with, for example, Confucius or Buddhism. For reasons of space, we shall restrict ourselves to a (neo-)Aristotelian account of virtue, but we suspect that the investigation of other virtue traditions could yield an interesting intercultural approach to the ethics of social robotics. See also [51].
Consequentialism and deontology are the two main
rival theories to virtue ethics, and they dominate cur-
rent discussions on the ethics of social robotics. Conse-
quentialism is the ethical doctrine that takes the out-
come of an action as fundamental to normative ques-
tions. Deontology or duty-based ethics takes the princi-
ples motivating an action as central to matters of moral-
ity. Operationalization of these frameworks can take dif-
ferent forms. For example, in the case of consequential-
ism, artificial agents could be programmed to evaluate
the potential costs and benefits of an action [20, 55, 41,
25]. Or, in the case of deontology, designers may strive
to implement top-level moral rules in agents [19]. [2] As consequentialism and deontology provide frameworks that can be translated relatively straightforwardly into
implementation guidelines, they may be attractive from
a roboticist’s perspective. While we value the contribu-
tions of consequentialist and deontological approaches
to the literature on robot ethics, we think that there
are ethical issues which virtue ethics is in a better posi-
tion to address. Such issues include how, in the words
of Shannon Vallor [51], advances in social robots are
“shaping human habits, skills, and traits of character
for the better, or for worse” (p. 211). Importantly, this
insight supports the idea that robots are not neutral
instruments, but that they may influence the way we
think and act. We side, therefore, with other researchers
who recognize that virtue ethics can be a fruitful frame-
work for AI and robotics [3, p. 37].

[2] Isaac Asimov's famous laws of robotics, often cited as an illustration in the ethics of AI literature, are modelled after deontological formulations of how one ought to act. They brilliantly showcase the inherent tension between deontological robotic directives and the potentially disastrous consequences that strict adherence to these might have.

Fig. 1 The Sociable Trash Box exhibits helpfulness and politeness when it requests trash and then bows after receiving it. Reprinted by permission from Springer Nature: Springer International Journal of Social Robotics [56], © 2019.
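To make the operationalization contrast mentioned above concrete, here is a rough sketch of our own (not drawn from the cited implementations) of how the two frameworks translate into decision procedures: consequentialism as cost-benefit scoring over outcomes, deontology as filtering actions through top-level rules. The actions and numbers are invented for illustration.

```python
# Toy operationalizations of the two rival frameworks; actions and
# scores are hypothetical placeholders.
actions = {
    "tell_truth": {"benefit": 5, "cost": 2},
    "tell_white_lie": {"benefit": 6, "cost": 1},
}

def consequentialist_choice(acts):
    # Score each action by its expected outcome and pick the best.
    return max(acts, key=lambda a: acts[a]["benefit"] - acts[a]["cost"])

FORBIDDEN = {"tell_white_lie"}  # a top-level rule, e.g. "do not lie"

def deontological_choice(acts):
    # Permit only actions that pass every top-level moral rule,
    # regardless of their outcomes.
    permitted = [a for a in acts if a not in FORBIDDEN]
    return permitted[0] if permitted else None

print(consequentialist_choice(actions))  # tell_white_lie (best net outcome)
print(deontological_choice(actions))     # tell_truth (only rule-abiding act)
```

The contrast also suggests why these frameworks appeal to roboticists: both reduce to a ranking or a filter, whereas a virtue is a trait of the whole agent and resists this kind of local encoding.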
There are at least three ways in which virtues (and
vices) might play a role in social robotics. First, we
may consider which virtues are or ought to be involved
on the human side of robot design. For instance, is it
desirable that a roboticist exhibits unbiasedness and
inclusiveness when designing a robot? Second, robots
may nudge users towards virtuous (or vicious) beha-
viour. An exercise robot, for example, can encourage
proper exercise and discipline by giving positive feed-
back to its user. Third, robots may exhibit virtues (and
vices) through their own behaviour. This can be illus-
trated by the Sociable Trash Box robot developed at
Michio Okada’s lab at Toyohashi University of Tech-
nology [56]: these robots exhibit helpfulness and polite-
ness through their vocalisations and bowing behaviour
when they collaborate with humans to dispose of trash
(see Figure 1). So one could focus on the virtues of the
designer, on the way robot behaviour affects the virtues
of a human interacting with it, or on the virtues dis-
played by the robot, for instance, as an example to be
followed or learned from. We will focus on the latter
two points, but towards the end discuss their implica-
tions for design. We think it is likely that the degree
of anthropomorphism [45,46,15,11] will play an important role especially for the second and third topics. This
needs to be further investigated, but for the purposes of
this paper we will discuss robots that tend towards the
anthropomorphic rather than the more functional end
– like conventional sex toys – of the anthropomorphism
spectrum.
In relation to the third aspect, some have said that
virtues might be difficult, or even intractable, to imple-
ment in a robot. This idea is motivated by the com-
plexity of giving general, context-independent defini-
tions of specific virtues and because an implementation
of a virtue like honesty “requires an algorithm for deter-
mining whether any given action is honestly performed”
[5, p. 258]. Although we acknowledge the specific im-
plementation challenges that virtue ethics brings, we
think these challenges can be addressed by rejecting the underlying, mistaken assumption that virtues need to be implemented top-down into the robot. Analogous
to how humans learn to be virtuous not by being told
what to do but by example, implementing virtues into
the design of social robots can take a similar situa-
tional approach. For this reason, it has been argued
that the “virtue-based approach to ethics, especially
that of Aristotle, seems to resonate well with the [...]
connectionist approach to AI. Both seem to emphasize
the immediate, the perceptual, the non-symbolic. Both
emphasize development by training rather than by the
teaching of abstract theory” [27, p. 249]. This resem-
blance, we suggest, can help inspire the implementa-
tion of virtues in modern-day robots. The use of ma-
chine learning with artificial neural networks may be
a way of avoiding the need to write an algorithm that
specifies what action needs to be taken when. Virtues
that depend on, for example, recognizing emotions in a
human and require an emotional response can be imple-
mented by training a neural network on selected input
– say, by analysing videos of previously screened em-
pathic responses made by humans (as done by [31,29]).
Through machine learning, robots could similarly learn
to mimic certain behaviours that we might consider dis-
plays of virtue, such as a light touch on the shoulder
to express sympathy. The challenging research question
here would be how to operationalize this kind of train-
ing so that the robot learns from human teachers. Such
implementations are not trivial, but they need not be
intractable either.
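As a minimal sketch of this learning-based route (our own illustration; the features, labels, and behaviours are hypothetical placeholders for what would, in practice, come from annotated video data):

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: each row is a feature vector extracted
# from video of a screened human response to a distressed interlocutor
# (e.g., gaze contact, pitch variation, response latency, touch).
X_train = [
    [0.9, 0.7, 0.3, 1.0],  # annotated as an empathic response
    [0.1, 0.2, 2.5, 0.0],  # annotated as a non-empathic response
    # ... many more labelled examples in a real data set ...
]
y_train = ["empathic", "non_empathic"]

model = RandomForestClassifier().fit(X_train, y_train)

def select_behaviour(observed_features):
    """Imitate screened human examples instead of consulting an
    explicit, top-down rule for 'acting compassionately'."""
    if model.predict([observed_features])[0] == "empathic":
        return "light_touch_on_shoulder"  # hypothetical actuator routine
    return "maintain_respectful_distance"
```

The point of the sketch is the division of labour: what counts as an empathic response is fixed by human exemplars in the training data, not by a hand-written definition of the virtue.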
Two potential points of critique need to be addressed
before moving on. The first critique has been voiced by
robot ethicist Robert Sparrow [47], who argues that sex
robots could encourage vicious behaviour, while at the
same time maintaining that he finds it hard to imag-
ine sex robots could promote virtue. He proposes that
if people own sex robots, they can live out whatever
fantasies they have on the robots – even rape. He ar-
gues that repeated fantasizing about and repeated enactment of representations of rape will influence one's
character to become more vicious. Though we agree
with Sparrow’s premise that this development is prob-
lematic and deserves careful consideration, we disagree
with the conclusion drawn. While rape representation
might be facilitated by sex robots, this does not mean
that the production of such robots need always be eth-
ically inimical. Let us assume that rape-play between
two consenting adults is not necessarily morally wrong. [3]
What is potentially morally wrong in acting out this
scenario, is that it might normalize the associated re-
peated behaviour outside of a consensual context – the
cultivation of a vice. This could lead to unwanted de-
grading behaviour or generalization to other contexts
involving human-human interaction. The same risk of
inappropriate generalization applies to the scenario of
the human–robot interaction. In the case of humans,
this means that careful and continuous communication
about what is allowed and what is not is crucial: the
partners have to trust and respect each other in or-
der to safely play out the fantasy and stay aware of
the fact that it is a fantasy. Might a similar approach be applicable to intimate human–robot interactions? We
submit that there are ways to involve consent in intimate human–robot interaction aimed at preventing the risk Sparrow is drawing attention to, without
condemning the manufacture and use of sex robots in
principle. [4] It would require us to rethink sex education and the role sex robots can play in this, which we do in Section 3. Interestingly, if one accepts that sex robots may cultivate vices in humans, it seems possible that such robots potentially also cultivate virtues. [5]

[3] It is worth noting that on Sparrow's account one will have to bite the bullet and say that rape-play by consenting adults is morally wrong as well. Not everyone will be willing to accept this implication.

[4] Obviously, the consent provided by a robot does not amount to legally binding consent, just like the rape of a robot would not constitute legal rape, for the simple reason that a robot is not a legal person and not a sentient being. Hence, we are discussing here the implications of a robot behaving in a certain way, not necessarily implying the existence of human-like cognitive or emotional states or identical legal status.

[5] Sparrow [47] finds it "much less plausible that sustaining kind and loving relationships with robots can be sufficient to make us virtuous" (p. 473). He acknowledges, however, that such a claim needs to be supported by an argument as to why virtues are to be held against a standard different from vices and that this is a topic for further discussion. We do not share his intuition, though we agree with his latter point and would furthermore like to add that more empirical data on how human–robot interaction influences human behaviour is needed – which is one of the motivations for the proposal in Section 3 of the present paper.
A second issue that needs addressing is a more gen-
eral critique of virtue ethics. It has been argued
that virtue ethics as an ethical theory is “elitist and
overly demanding and, consequently, it is claimed that
the virtuous life plausibly could prove unattainable”
[26, p. 223]. Why propose such a demanding ethical
theory for framing human-robot interaction? First, be-
cause virtue ethics can do justice to an assumption
we make, namely that intimate, sexual relations be-
tween humans and robots should be understood as bi-
directional. In this context, bi-directional means that
humans design robots, while the general availability of
such robots in turn may influence human practice of
and ideas on intimacy and love. In contrast, current
ways of thinking about intimate human-robot relations often start from an instrumental and unidirectional assumption. Such rival accounts understand these rela-
tions as the usage of tools by humans and see any influ-
ence that robots may have on humans as value-neutral.
They are focused on the human perspective and there-
fore lose sight of important potential ethical implica-
tions of human-robot interaction, as we will argue in
Section 2 and as illustrated in Table 1. Our assumption
is in line with current developments in cognitive science
and philosophy of technology, which suggest that the
cognitive and moral dimensions of artefact interaction
need to be understood from a distributed perspective
that puts equal emphasis on agent and environment [52,
53,17,22].
Another and possibly even more exciting reason to
engage with virtue ethics, is that thinking about virtues
in relation to robots might actually help to make vir-
tuous behaviour more attainable. This might be done
through the habit-reinforcing guidance of humans by
robots designed to promote virtuous behaviour: either
by robots nudging human behaviour directly or by robots
exhibiting virtues themselves.
2 Contra instrumentalist accounts
Recent discussions on intimate human–robot relations
are often informed by the work of David Levy [32,33].
Levy argues that humans will have physically realistic,
human-like sex with robots and feel deep emotions for
and even fall in love with them. Although we laud the
pioneering work Levy has done to open up sex and love
with robots for serious academic discussion, we argue
that his framework fails to properly account for the
ethical and social implications involved.
Regarding sex, Levy suggests that, physically speak-
ing, realistic human-like sex with robots will be possible
in the near future. Though Levy paints a colourful his-
tory of the development of sex technologies, discussion
of this is not of prime importance for our argument and
we will not examine it further. For the present discus-
sion, we will assume that the physical aspects of these
robots can be worked out more or less along the lines
which Levy describes. Interestingly, Levy goes so far as
to say that “robot sex could become better for many
people than sex with humans, as robots surpass human
sexual technique and become capable of satisfying ev-
eryone's sexual needs" [32, p. 249].

Table 1 Suppose we compare the multiple approaches in a hypothetical scenario where sexual consent is negotiated, verbally or otherwise, between two human partners. This table aims to show how such a scenario can be analysed in the different ways discussed in the present paper. This rough distinction should not be taken to mean that, for example, consequentialism cannot talk about virtues. What distinguishes the different approaches is which concept they take to be central.

                    Fundamental concept   Concept applied
  Consequentialism  Action outcomes       Obtaining consent maximizes well-being for both parties.
  Deontology        Moral rule            Obtaining consent is in accordance with the rule: "Do unto others as you would be done by."
  Virtue ethics     Virtue                Obtaining consent is compassionate and respectful.
  Instrumentalism   Instrumental use      Obtaining consent is not necessary, unless required for obtaining satisfaction.
Regarding emotions and love, Levy suggests that it
is possible that humans can be attracted to and even
fall in love with robots. Without going into unnecessary detail, his argument proceeds in four steps.
First, Levy lists what causes attraction of humans to
each other. Second, he considers how affective relation-
ships between humans and pets develop, and, third,
how such relationships develop between humans and
their virtual pets. Fourth and finally, Levy applies his
findings to human–robot relationships.
Through a careful examination of feelings of bond-
ing and attraction in humans, Levy comes to the con-
clusion that humans will likely develop similar feelings
of bonding and attraction for robots. A large role in
this narrative is reserved for the human tendency to
anthropomorphize artefacts (see [13,45]). He submits
that “each and every one of the main factors that psy-
chologists have found to be the major causes for humans
falling in love with humans, can almost equally apply
to humans falling in love with robots” [32, p. 128]. It
seems that there are no major hindrances for humans
to, at some point in the future, fall in love with their
robot. We can, in principle, agree with this conclusion, and recent preliminary empirical evidence appears to support it [40].
Obstacles on the path towards the use of love and
sex robots are deemed by Levy to be of a merely practi-
cal nature. The robots described are presented as taking care of and recognizing the needs of their human partner
– in terms of the feelings of bonding and attraction he
listed earlier. On several occasions [34,32, pp. 219, 233]
Levy compares sex with a robot to masturbation, and
uses that comparison as a reason why robot-sex would
prevent cheating on one’s partner [p. 234] – like in the
case of soldiers on a long-term mission. Moreover, Levy
describes this perspective on sex as a kind of “consump-
tion” [32, p. 242]. It is for this reason that we charac-
terize accounts such as Levy’s as ‘instrumentalist.’ Love
and sex robots, on such accounts, are merely tools to be
used or products to be consumed. However, we suggest
that such an instrumentalist perspective could lead to
practices that provide cause for concern. Also, we are
not convinced that a purely instrumentalist use of sex
robots would make many people “better balanced hu-
man beings” [p. 240].
A first concern is that framing robot-sex as con-
sumption underestimates the potential impact that the acceptance of love and sex robots will have on the way
love and sex are perceived. Consider a world where your
“robot will arrive from the factory with these parame-
ters set as you specified, but it will always be possible
to ask for more ardour, more passion, or less, according
to your mood and energy level. At some point it will
not even be necessary to ask, because your robot will,
through its relationship with you, have learned to read
your moods and desires and to act accordingly” [32,
p. 129].
Why would people, when such partners are avail-
able, be content with any kind of relationship, emo-
tional or sexual, that would not adhere to this stan-
dard of perfection? Access to these robots would make
it tempting to view relationships as essentially one-directional, effortless need-catering, especially perhaps for adolescents who grow up with such access. This
is not how love and sex at present need to be, or even generally are, conceived, and it goes deeply against the
conception of a relationship as existing between two
or more equal persons. Seeing humanoid robots capa-
ble of emotional and sexual interaction as tools is like
being in a relationship with a slave. An important question lies at the core of this issue, namely whether there are ways of conceiving the relationship between human and robot that are not slave-like.
However, this falls outside the scope of the current pa-
per (though for a beginning of an answer to this ques-
tion, see [15]). In any case, this comparison illustrates
the extent to which Levy’s framework is unidirectional,
which is further exemplified by his comparison of robot-
sex with masturbation. Masturbation, at least generally
speaking, is a solitary enterprise, and does not reflect
the reciprocal interaction that characterizes a typical
sex encounter between two partners. [6] Precisely because
robot-sex does not amount to either masturbation or
sex between consenting adults, one needs to address its
particular ethical implications.

[6] This also illustrates that robot-sex is not or need not always be wrong. This would be as extravagant a claim as the suggestion that masturbation is always wrong.
The second worry is that the instrumentalist ap-
proach allows for downplaying the risk of addiction in-
herent in interacting with robots that can perceive and
immediately cater to their partner’s every need. Con-
sider how Levy describes that “robots will be programm-
able never to fall out of love with their human, and
they will be able to ensure that their human never falls
out of love with them” and “your robot’s emotion de-
tection system will continuously monitor the level of
your affection for it, and as that level drops, your robot
will experiment with changes in behaviour until its ap-
peal to you has reverted to normal” [32, p. 118]. This
sounds like the perfect gambling machine, which con-
stantly updates its rules according to its user’s desires
– though these robots are potentially far more addic-
tive than any currently existing gambling machine. We
think this issue is insufficiently addressed by instrumen-
talist approaches such as Levy’s, because, if one thinks
of robots as merely neutral tools, as he does, then any
risk of addiction rests solely on the shoulders of the user
and not on a robot or its designers. However, it is an
open question whether this is how robot-sex will be ex-
perienced by human users (or their significant others).
Rather, we suggest that robots are not merely neutral
tools.
A convincing argument in this regard is provided by
Peter-Paul Verbeek [53], who argues that, for instance, obstetric ultrasound is not merely a neutral tool,
a ‘looking glass’ into the womb. Its use raises impor-
tant ethical questions, like “What will we do when it
looks like our unborn child has Down syndrome?” or
social pressure such as “Why did you decide to let the
child [with Down syndrome] be born, given that you
knew and you could have avoided it?”, or more general
societal questions like “Is it desirable that ultrasonog-
raphy leads to a rise of abortions because of less se-
vere defects like a harelip?” [53, p. 27]. This shows that
the use of obstetric ultrasound influences our moral do-
main. It is naive to think that using technologies would
not shape our behaviour and societal practices. Instead,
it is better to think about this shaping of behaviour
while designing technology. Similarly, instead of seeing
robots as neutral tools, we should acknowledge that, for
instance, robots may evoke more emotions in us than
other tools do, as Matthias Scheutz [39] suggests. More
importantly perhaps, the design and use of intimate
robots presuppose or establish certain practices con-
cerning ‘appropriate intimacy.’ At the very least, these
practices and their underlying assumptions should be
elucidated.
Two conclusions can be drawn from the above ac-
count. First, humans and technologies should not be
seen as separately existing entities, with technology pro-
viding neutral products for human consumption. Sec-
ondly, ethical analyses are not based on pre-given ideas
or criteria, but need to re-evaluate how human-artefact
interaction may be influenced or radically changed by
new technologies. This means that stakeholders partici-
pating in the design of technologies have a responsibility
both in considering how their products will shape human behaviour and in reflecting on the ethical issues that
may arise with the use of their product.
On this view, designers are “practical ethicists, us-
ing matter rather than ideas as a medium of moral-
ity” [53, p. 90]. In this framework there is room for the
moral aspects of technologies in a pragmatic context,
without it becoming a ‘thou shalt not’-like ethics. A
virtue-ethical approach is exactly what the topic of in-
timate relations with robots needs, because interacting
with a robot as an artificial partner is, even more so
than with a regular artefact, a relationship which in-
timately shapes our own dispositional behaviour and
societal views as well. At first sight, Levy seems open
to a more interactive view when he refers to Sherry
Turkle, taking up her line of thought in saying that he
“is certain that robots will transform human notions”
including “notions of love and sexuality” [32, p. 15]. The
way Levy discusses situatedness resonates with the no-
tions that humans and technologies should not be seen
as strictly separate entities and that certain concepts
are not pre-given but arise out of interaction between
humans and artefacts. Does that mean Levy has suc-
cessfully anticipated critique along the lines we have set
out? It does not.
Although Levy seems sensitive to the two notions
mentioned, in practice he merely pays lip-service to interactive human–technology approaches. His instrumen-
talist treatment of human–robot relations deals with
humans and robots in terms of isolated atoms with only
a one-way connection between them, from user to robot,
without any consideration of the larger reciprocal in-
teractive effects on behaviour and social practices. He
does not analyse robot-sex in terms of the structures
and situatedness he earlier described. Any instrumen-
talist framework will focus on the human, subject side
of things and portray robots as neutral artefacts to be
used. What Levy describes is a trend of an increasing
acceptance of robot sex, not how it would actually con-
stitute or change (our conceptions of) sex or intimate
relationships. Even if one agrees that masturbation is
not cheating – an open question, likely to be influenced
by many contextual factors – that does not necessarily
mean that having sex with a robot will not be consid-
ered as cheating. An intelligent android functions on a distinctly different level of companionship than,
say, a vibrator. More dramatically, if instrumentalist
thinkers on the one hand argue that an intimate rela-
tionship with a robot is possible and imply that these
kinds of relationships can be as intense and realistic
as intimate relationships between humans, then they
should agree that being intimate with such a robot,
while in a relationship with someone else, could be con-
strued as cheating. At the very least, one has to con-
cede that robot-sex in such a scenario cannot simply
be equated to masturbation. In other words, even as-
suming that one would find it hard to imagine someone being jealous of one's partner using a vibrator, one could still imagine jealousy playing a role when one's partner engages in sexual activities with a very human-looking and acting robot. [7]
The analysis we have given shows that instrumental-
ist approaches may leave crucial ethical considerations
unaddressed. Notions of love and sex will be changed
by the development of humanlike robots. But how will
these notions change? If we can have sex robots which
are “always willing, always ready to please and to sat-
isfy, and totally committed” [32, p. 229], what will that
do to the way we view relationships? An understanding
of robot-sex not as instrumental, neutral use of tools,
but as involving a reciprocal interaction between hu-
man agents, robots and their designers is required to
develop adequate answers to questions such as these.
This is where virtue ethics can provide a guide for eval-
uation of such interactions.

[7] The Swedish science-fiction television drama Äkta människor (Real Humans, 2012) depicts an example of this when the relationship between Therese (Camilla Larsson) and her husband turns sour because he grows jealous of her 'hubot' – a humanoid robot capable of exactly the functions Levy discusses. This depiction is fictional, of course, but the force of the story at least casts doubt on any outright dismissal of the possibility that humans will become jealous of robots.
3 Consent practice through sex robots
In order to investigate how sex robots could make a pos-
itive contribution to human moral character, we draw
on virtue ethics for ideas on how to cultivate virtues
and connect those to insights from current empirical
data provided by the literature on robotics and psychology.
Our aim is to avoid the problem of cultivating vices
through repeated unnegotiated practice – as illustrated by Sparrow. Indeed, well-designed robots may
create the possibility to actually improve attitudes and
behavioural habits regarding sex. First, consider the
human–sex robot rape play scenario again. Previously,
we argued that what is problematic about this scenario
is not the act between consenting adults itself, but the
potential normalization of behaviour it could lead to.
For instance, the human participant may become ac-
customed to immediate satisfaction of desires through
the use of a human-looking object and might extend
the involved behavioural patterns to objectify other hu-
mans.
One way of preventing unwanted behavioural pat-
terns is by providing sex robots with a module that can
initiate a consent scenario. Like consenting humans, a
robot and its human partner will have to communicate
carefully about the kind of interaction that will take
place and the human will be confronted by the subject-
like appearance and the behaviour of the robot. And
like in a relationship between humans, this communi-
cation could potentially result in the robot sometimes
not consenting and terminating the interaction. Such
interaction with a robot might prevent the practice of
unidirectional behavioural habits and a resulting in-
creased objectification of other humans. [8] This consideration suggests that the potential psychological and behavioural benefits of a consent-module make it at least worthy of investigation. One should note, too, however, that a consent-module may negatively affect
the potential economic gains of sex-robot producers, a
consequence that is not our main concern here. Second, the consent-module has potential benefits for sexual practice and cultural perception in general, namely in cultivating the virtue of compassion.
Though we focus on compassion for the sake of limiting
the scope of this case study, other virtues, such as re-
spect, likely ought to play a role in consent-practice as
well. We take compassion here as the ability to care for
and open up to another person without losing sight of
one's own needs and feelings. Virtuous displays of compassion strike the right balance between care for others and for oneself. Compassion can motivate a desire to help others and we take it to be related to, though distinct from, empathy (see [28]).

[8] On the other hand, one might argue, as Sparrow does, that a non-consenting robot could potentially facilitate (the representation of) rape scenarios even more if the human partner ignores the robot's refusal of consent. We do not have a solution for that problem here (although, for example, a simple 'complete close-and-shutdown' routine might be an option), but it is a main reason why we suggest later in this paper that this kind of human–robot interaction be tested in a therapeutic setting first, as testing under supervision may give us new insights into how to deal with issues such as these. In any case, we are not convinced that this argument is sufficient grounds not to investigate the potential benefits of consenting robots further.
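To make the consent-module proposal more concrete, a minimal sketch of the negotiation flow follows. Everything here is a hypothetical placeholder: in our proposal the actual decision logic would be developed with, and supervised by, therapeutic professionals.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    scenario: str            # e.g. "conversation" or "unnegotiated_act"
    boundaries_discussed: bool

AGREED_SCENARIOS = {"conversation", "role_play"}  # negotiated beforehand

def robot_consents(proposal: Proposal) -> bool:
    # Placeholder for a clinician-vetted decision procedure; here the
    # robot simply declines anything outside the agreed scenarios.
    return proposal.scenario in AGREED_SCENARIOS

def consent_scenario(proposal: Proposal) -> str:
    # Communicate about the interaction before it takes place,
    # mirroring the negotiation expected between human partners.
    if not proposal.boundaries_discussed:
        return "request_explicit_discussion_of_boundaries"
    if robot_consents(proposal):
        return "proceed_with_agreed_interaction"
    # Like a human partner, the robot can sometimes decline and
    # terminate the interaction rather than comply.
    return "decline_and_terminate_interaction"

print(consent_scenario(Proposal("unnegotiated_act", True)))
# -> decline_and_terminate_interaction
```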
A robot equipped with a consent-module could po-
tentially be used to investigate ways of improving con-
sent practice in general. Often, partners communicate
their willingness to engage in sex through nonverbal
cues [14]. Yet, because nonverbal cues can be ambigu-
ous, miscommunication can and does occur [2]. In re-
sponse, some governmental institutions have advocated
the need for active, verbal consent. The practice of active consent faces at least two problems.
First, even verbal consent does not necessarily mean
that a partner is freely engaging in sex, because, for
example, social pressure or substance abuse may be in-
volved [35]. Second, explicit consent has met with cul-
tural resistance, as men and women generally believe
discussing consent decreases the chance that sex will
occur [30]. Still, active consent is seen as a crucial way
of combating sexual assault and rape, for example, at
college campuses [1,8,12]. There is a need to change
perceptions and practice, especially by men [4], con-
cerning healthy consent and sexual practices. Virtuous sex robots – used under supervision – might help facilitate a much-needed cultural change in this regard by enabling further investigation of ways of navigating consent.
The advantage of using sex robots over traditional
top-down education is that the robots can provide a
kind of embodied training that helps adolescents in ne-
gotiating sexual consent. Interaction with a compassion-
cultivating sex robot could raise awareness of how these
scenarios could play out and alter behaviour through
training. A sex robot which not only can practise con-
sent scenarios with a human partner, but which can
actually cultivate a virtue like compassion could poten-
tially be used in sex education and therapy. A robot
cannot suffer and so any moral harm during education
or training will be minimized. It seems to us that com-
passion is a suitable virtue to be practised using sex
robots in sex education and therapy. If successful in
clinical trials, such robots can be used to support a
change in perception and behaviour of consensual sex
on a larger scale, and not just with adolescents.
One might be sceptical as to whether robots can fa-
cilitate a dependable long-term change in compassion –
whether negative or positive. It seems reasonable
not to judge this prematurely, as assessing the long-
term effects of sexual human-robot interactions requires
empirical investigation by sexologists and psychologists.
A number of interesting experiments on the influence
of social robots on human behaviour in more general
terms have been done in the lab of Nicholas Chris-
takis. In one (virtual) experiment [43], humans were
placed into groups which had to perform a task. Un-
known to the participants, these groups also contained
robot agents. The robotic agents were programmed to
make occasional mistakes which adversely influenced
group performance. This behaviour led the human participants who collaborated directly with a robot to
become more flexible in finding solutions that benefited
group performance. Similarly, a related experiment [50]
reported that humans who collaborated on a task with
robots which made occasional mistakes and acknowl-
edged their mistakes with an apology became more social – laughing together more often – and more conversational.
The design of virtuous sex robots requires think-
ing about a setting in which to test and apply them.
A case study will give the constraints necessary for the
design to be specific and feasible. We further think that
building a robot which can operate in long-term inti-
mate relations in general first requires at least building
a robot which can operate on a smaller timescale with a
specific target audience. Furthermore, it would be necessary to have the support of supervisors – in addition to the AI researchers, who should of course also be involved – who have professional training in psychology or psychiatry. We therefore propose to start by testing virtuous
sex robots in a therapeutic setting.
As the specific target audience or participants, we suggest considering persons who have been diagnosed with narcissistic personality disorder (NPD [6]), as
the common medical understanding of NPD aligns well
with the previously given definition of compassion. We
propose to consider NPD patients who are already within
a therapeutic setting, as this means that testing can be
done in a controlled environment, under supervision of
professionals in psychiatry, psychology, and sexology.
The robot’s design, testing and development before-
hand should involve these same professionals, especially
regarding the potential effects of a robot’s refusal of
certain kinds of interaction. The anticipated link with
compassion can be found in the latest edition of the
Diagnostic and Statistical Manual of Mental Disorders
(DSM-5). In it, narcissism is described as a “pervasive
pattern of grandiosity, need for admiration, and lack of
empathy” [6]. Nine indicators are listed for narcissis-
tic behaviour, of which the third, fifth, and sixth are
of special interest for us here. Respectively, those in-
dicators are about the narcissist feeling special, being
exploitative in social relations, and lacking empathy. If
compassion as a virtue is the golden mean between two
extremes, then it seems that the narcissist, who feels
better than others and is self-obsessed, is at one extreme
of the spectrum. [9] We would describe this extreme (or vice) as the tendency to be overly involved with oneself. Hence, training the virtue of empathy and
compassion would be most relevant for this focus group.
Designing and evaluating a robot aimed at influencing
the behaviour of persons is the most prominent, and
challenging, task to be set. Though there is a lack of
information on successful NPD treatments [21], there
is some preliminary evidence that empathic treatments
of those with NPD have positive effects [10].
Obviously, operationalizing our proposal requires care-
ful testing before the possibility of actual use in train-
ing is even considered, as care for patients and the safety of those potentially harmed by their conduct are paramount. One potential worry might be, for example,
that people with narcissistic tendencies become more
proficient in their manipulations. Therefore, profession-
als involved would need to closely monitor the patients
and signal such possible undesired effects. These cau-
tionary words notwithstanding, the potential support
of compassionate robots for NPD treatments is in line
with the aforementioned preliminary evidence [10] and
worth further investigation.
The next step in making the robot ready to teach
compassion is to train it to give basic responses to
certain kinds of behaviour. As proposed before, this
could be done by training it on recordings of how com-
passionate people respond to different kinds of (inap-
propriate) behaviour. This means the robot has to rec-
ognize at least one extreme on the compassion spectrum
in terms of behaviour of its partner, and has to perform
behaviour appropriate to what it observes. Figuring out
what good identifiers of those extremes are and what
responses work best will need to draw heavily on the
expertise of the psychiatrists involved.
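As a toy illustration of what such a trained mapping might feed into (the scoring rule, cues, and thresholds are our own placeholders, to be replaced by measures developed with the clinicians involved):

```python
def compassion_score(cues):
    """Map observed cues to [-1, 1]: -1 marks exclusively
    self-directed care (the narcissistic extreme), +1 exclusively
    other-directed care. Both cue values are assumed to lie in [0, 1]."""
    return cues["other_focus"] - cues["self_focus"]

def robot_response(cues, tolerance=0.3):
    score = compassion_score(cues)
    if score < -tolerance:
        # Partner ignores the robot's signalled needs: model the
        # behaviour we want mirrored, e.g. explicitly voicing a need.
        return "state_own_boundary_and_ask_about_partner"
    if score > tolerance:
        return "encourage_partner_to_voice_own_needs"
    return "affirm_balanced_interaction"

print(robot_response({"self_focus": 0.9, "other_focus": 0.1}))
# -> state_own_boundary_and_ask_about_partner
```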
Compassion is considered here as the virtue which lies between the extreme of only caring about oneself (the narcissist) and that of only caring about another person.
That means that a robot designed to treat these kinds of
disorders should be able to direct behaviour towards the
middle of the spectrum, where there can be a healthy
focus on both caring for oneself and caring for others.
[9] In the spirit of virtue ethics, one could consider Dependent Personality Disorder (DPD) to be the other extreme on the compassion spectrum [6]: "They are willing to submit to what others want, even if the demands are unreasonable. Their need to maintain an important bond will often result in imbalanced or distorted relationships. They may make extraordinary self-sacrifices or tolerate verbal, physical, or sexual abuse." It would be interesting to investigate how love and sex robots could be relevant for training and therapy for members of this group as well.
We suggest that it may be worthwhile to investigate
whether and how such behaviours could be influenced
by a compassionate robot. If this turns out to have
promising results, work can be done on improving the
design and expanding the use of such robots for other
settings and for other groups of people.
4 Implications of virtuous sex robots
We have striven to demonstrate that virtue ethics pro-
vides a useful framework for analysing the implications
of sex robots, as well as for making recommendations
for the design and application of such robots. We con-
sider robot-sex as involving and supporting a reciprocal
interaction between human agents and robots instead of
as a form of uni-directional instrumental tool use. Ap-
plying virtue ethics led us to suggest a consent-module
for sex robots that could support the development or
strengthening of compassion in supervised, therapeu-
tic scenarios. As such, sex robots may contribute to
the cultivation of virtues in humans. However, virtue
ethics does come at a price. In addition to its poten-
tial to provide an interesting perspective on the issues surrounding sex robots, it may also raise new problems.
As an illustration of the latter, we would like to briefly
reflect on two implications of implementing a consent-
module. Robots saying 'no' to the human who uses or owns them can lead to at least two related prin-
cipled problems and one big practical challenge.
First, robots that refuse to comply with the de-
mands or wishes of human beings may obstruct a per-
son’s autonomy, for example, as expressed by someone’s
immediate or long-term desires (see [9] for a field study in the context of service robots for the elderly). Sec-
ond, there is the threat of a responsibility gap. Finally,
there is the practical challenge of how to design such a
consent-module. We will offer some minor suggestions
to address the latter at the end of this section.
We will illustrate the problem of a user’s autonomy
by considering a simple example in a different context.
Imagine a beer robot, a simple system that keeps a stock
of beers cooled and that brings one on demand. Obvi-
ously, at some point this might result in intoxication of
the person demanding the beer. To what extent should
a (‘virtuous’) beer robot be enabled to refuse the de-
mands for another beer? Even though the consequences
of intoxication may be bad for the persons themselves,
as long as no one else or no one else’s property is hurt,
one might conclude that it is an expression of a per-
son’s autonomy to keep the beers coming. It is only or
at least primarily in the context of negative effects for
other persons or legal agents, that one could morally
or legally preclude someone from having their wishes
gratified. So, on the one hand, the human should be
in control, but at some point or in certain contexts it
could be legitimate or morally acceptable to limit the
amount of control a human may have.
Regarding the responsibility gap, the problem is that
when a human instructs a well-functioning robot to do
something, and the robot is programmed to refuse to
follow the instructions, all kinds of consequences may
follow from that refusal for which the human, in essence,
cannot or need not be held responsible. This leads to
the question: Who would be responsible or accountable
for any damages, psychological or physical, that may
ensue? Of course, problems regarding the consequences
of saying ‘no’ are not specific to virtue ethics. Rather,
they are a consequence of any view that implies that
robots under certain conditions should refuse specific
instructions. However, this is worth discussing here be-
cause our analysis of virtue ethics leads to the proposal of a
consent-module, and its consequences should be noted.
In our brief discussion, we will try to focus as much as
possible on the specific nature of the ensuing problems
in the context of sex robots.
In order to address these issues of autonomy and
responsibility, we suggest considering the principle of
‘meaningful human control’. This principle has been
discussed in the contexts of military robots and self-driving cars. The principle states that ultimately humans
should remain in control and carry (ultimate) responsi-
bility for robot decisions and actions [7]. However, it is
far from clear what this principle amounts to in prac-
tice, that is, what the requirements are for the robot so
that it is capable of enabling this principle. Filippo San-
toni de Sio and Jeroen van den Hoven [44] indicate that
humans merely ‘being in the loop’ or controlling some
parameters may be insufficient for meaningful control
if other parameters turn out to be more relevant to
the robot’s use or if the human lacks enough informa-
tion to appropriately influence the process. In addition,
possessing an adequate psychological capacity for (as-
sessing) appropriate action is required for meaningful
control, as is, thirdly, an adequate (legal) framework for
assessing responsibility for consequences. Santoni de Sio
and van den Hoven then analyse meaningful control in
terms of John Fischer and Mark Ravizza’s [24] theory of
guidance control. Guidance control is realized when the
decisional mechanism leading up to a particular beha-
viour is “moderately reason-responsive”, meaning that
in the case of good reasons to act (or not), the agent
can understand these reasons and decide to act (or not),
at least in several different relevant contexts. Moreover,
the decision-making mechanism should be “the agent’s
own”, in the sense that there are no excusing factors
such as being manipulated, drugged, or disordered.
This, admittedly brief, consideration of meaningful
guidance control provides a criterion that might be use-
ful for the consent-module. It provides ground to think
that when a human does not possess sufficient guidance
control, or, by robot compliance with human instruc-
tions, may lose such control, a robot could be justified
in non-compliance. This leads to two questions that
need to be answered before a virtuous sex robot can
be enabled with a consent-module, allowing it to refuse
commands:
1. Is the person giving the current command in a state
of meaningful human control?
2. Will complying with the current command lead to
a reduction of meaningful human control, such that
(1) is no longer the case?
In relation to the first question, the beer robot could
make use of relatively reliable physiological measure-
ments (like breath or blood analyses), or behavioural
observations (like slurred speech or coordination dif-
ficulties). It will be more difficult to figure out which
input patterns might engage the consent-module to gen-
erate refusals. Here too, the expertise of psychologists
and psychiatrists, in relation to NPD for instance, is re-
quired. The main suggestion here is that a DSM-5 clas-
sified disorder in itself constitutes a reason for at least
considering the possibility that the ability to act rea-
sonably and compassionately might be affected, or that
sound judgement and behavioural control might be im-
paired. Practically speaking, it would be relevant to in-
vestigate the extent to which data acquisition methods
related to emotion recognition and sexual harassment
might apply. Among potential indicators one could think
of, for example, the human's failure to allow turn-taking
in communication, tone of voice and body posture, ne-
glect of robotic non-verbal signals of non-interest, and
so on (see, e.g., [36,38]). As a second step, investiga-
tions regarding the applicability of machine learning
techniques are relevant (e.g., [23]).
The second question points to a difference between
the case of the beer robot and the virtuous sex robot.
In the case of the beer, a prediction about intoxication can be made on the basis of physiological variables.
Given certain physiological aspects, the time course of
the intoxication can be inferred with reasonable, and
legally satisfactory, certainty. An intoxication level close
to life-threatening alcohol-poisoning, just to mention
a relatively clear case, could result in justifiable robot
non-compliance. However, in the case of the virtuous
sex robot such a prediction about the consequences of
(non-)compliance is not as straightforward. For this rea-
son too, it bears emphasis that we are suggesting the
investigation of the consent-module within clinical con-
texts. Assuming, for the moment, agreement regarding
the appropriateness of a robot’s non-compliance in cer-
tain situations, there is still a further question about
how the non-compliance should be put into effect. We
just mention a few possibilities here. One option is that
a robot may refuse to comply, provide an explanation
in terms of its assessment of the potential negative con-
sequences, and provide information aimed at improved
self-understanding and self-control. Ideally, this could
result in a retraction of the instruction given. Another
option may be that the robot refuses and informs a
support group of, say, significant others or therapists.
A more extreme option would be that the robot re-
fuses and stops functioning altogether, by way of an
emergency close-and-shutdown operation. Finally, it is
worth noting that we may need to stretch our concepts
of autonomy and responsibility beyond the individual
and recast them in terms of open-ended and ecologi-
cal processes (see [16]). Unfortunately, picking up this
topic lies beyond the scope of the present paper.
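The non-compliance options just mentioned could be arranged as a graduated escalation policy. A sketch with purely illustrative names follows; which step is appropriate in a given case would be a clinical and, possibly, legal judgement rather than a simple lookup.

```python
from enum import Enum

class Refusal(Enum):
    EXPLAIN_TO_USER = 1       # refuse, explain, aim at retraction
    NOTIFY_SUPPORT_GROUP = 2  # refuse and alert therapist or partner
    EMERGENCY_SHUTDOWN = 3    # refuse and stop functioning altogether

def escalate(ignored_refusals: int) -> Refusal:
    # Escalate as earlier, milder refusals are ignored.
    if ignored_refusals == 0:
        return Refusal.EXPLAIN_TO_USER
    if ignored_refusals == 1:
        return Refusal.NOTIFY_SUPPORT_GROUP
    return Refusal.EMERGENCY_SHUTDOWN
```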
Undoubtedly, many other issues and ways of ad-
dressing them surround the notion of a consent-module.
We have explicated the present ones to emphasize that
virtue ethics does not provide easy solutions. Rather,
it opens up a research domain in itself, one that comes
with its own set of promises and difficulties that will
need to be addressed.
Conclusion
The field of robotics advances rapidly and robot ethics
ought to keep up. In the foreseeable future, there will
be robots advanced enough to evoke, even if only for
a few minutes, the experience in humans that they are
interacting with another human being. Unless a ban is
implemented [37], which we do not want to rule out, it
is likely that love and sex relationships with robots will
be formed. How can we best understand and evaluate
such relationships? We have taken some initial steps
towards answering this question by arguing that virtue
ethics is better suited than instrumentalist approaches
to evaluate the subtleties of intimate human–robot relationships. Next steps should involve careful testing, and with this in mind we have outlined how testing a consent-module for robots in a therapeutic setting may yield useful insights. Importantly, the implications for user autonomy and responsibility should remain a focus of future research.
Several challenges can be anticipated. First, while the misuse of sex robots could leave a lasting impression on an adolescent learning about intimate relationships, there is also a positive side to developing realistically looking and acting love robots. Such robots could train people to behave confidently and respectfully in intimate relationships. In a therapeutic setting, such robots could be used to improve empathy or increase self-love in persons with narcissistic or dependent personality disorders, respectively.
Another challenge is society’s response to sex robots.
It is difficult, if not impossible, to predict how our conceptions of love and sex will change with the introduction of love robots. One risk is that a societal taboo on love and sex with robots would lead to fringe behaviours and scenes, as has happened in the domains of drugs and prostitution. It is therefore important that the topic of sex robots, challenging, exciting, or revolting as it may appear to different parties, remains open for investigation and discussion.
The implications of developing love and sex robots are potentially huge, and we have tentatively charted one path within this domain: a virtue theoretical approach. Advances in other areas of robotics, such as care
robots or military robots, might have analogous impli-
cations. In these areas too, we should avoid the mistake
of assuming that robots will not change the way we
view healthcare and warfare. On the contrary, we need
to consider and assess which of these changes would be
desirable or should be avoided. In any case, we should resist the suggestion that all these developments are necessarily bad. Some changes might be for the good, and that possibility is worth investigating. Once we realize that the way we design and use such robots is bound to affect us, we can, through careful consideration and monitoring, look for ways of improving ourselves through this technology.
Acknowledgements We dedicate this paper to our late col-
league, teacher, and friend Louis Vuurpijl, who, with infec-
tious enthusiasm, guided many students in their first steps
into the field of robotics. Many thanks to Nick Brancazio,
Miguel Segundo-Ortin, and several anonymous reviewers for
their feedback on a previous draft of this paper.
Compliance with Ethical Standards
Funding
Conflict of Interest The authors declare that they have no
conflict of interest.
References
1. Abbey, A.: Acquaintance rape and alcohol consumption
on college campuses: How are they linked? Journal of
American College Health 39(4), 165–169 (1991). DOI
10.1080/07448481.1991.9936229
2. Abbey, A.: Misperception as an antecedent of acquain-
tance rape: A consequence of ambiguity in communica-
tion between men and women. In: A. Parrot, L. Bechhofer
(eds.) Acquaintance rape: The hidden crime, pp. 96–111.
Academic Press, New York (1991)
3. Abney, K.: Robotics, ethical theory, and metaethics: A
guide for the perplexed. In: P. Lin, K. Abney, G.A.
Bekey (eds.) Robot Ethics: The Ethical and Social Impli-
cations of Robotics, pp. 35–52. MIT Press, Cambridge,
MA (2012)
4. Adams-Curtis, L.E., Forbes, G.B.: College women’s ex-
periences of sexual coercion. Trauma, Violence, & Abuse
5(2), 91–122 (2004). DOI 10.1177/1524838003262331
5. Allen, C., Varner, G., Zinser, J.: Prolegomena to any fu-
ture artificial moral agent. Journal of Experimental &
Theoretical Artificial Intelligence 12(3), 251–261 (2000).
DOI 10.1080/09528130050111428
6. American Psychiatric Association: Personality disorders.
In: Diagnostic and statistical manual of mental disorders,
5 edn. American Psychiatric Association (2013). DOI
10.1176/appi.books.9780890425596.dsm18
7. Article 36: Killing by machine: Key issues for un-
derstanding meaningful human control (2015). URL http://www.article36.org/autonomous-weapons/killing-by-machine-key-issues-for-understanding-meaningful-human-control/
8. Banyard, V.L., Ward, S., Cohn, E.S., Plante, E.G., Moor-
head, C., Walsh, W.: Unwanted sexual contact on cam-
pus: A comparison of women’s and men’s experiences.
Violence and Victims 22(1), 52–70 (2007)
9. Bedaf, S., Draper, H., Gelderblom, G.J., Sorell, T.,
de Witte, L.: Can a service robot which supports inde-
pendent living of older people disobey a command? The views of older people, informal carers and professional caregivers on the acceptability of robots. International Journal of Social Robotics 8(3), 409–420 (2016). DOI
10.1007/s12369-016-0336-0
10. Bender, D.S.: Mirror, mirror on the wall: Reflecting on
narcissism. Journal of Clinical Psychology 68(8), 877–
885 (2012). DOI 10.1002/jclp.21892
11. Björling, E.A., Rose, E., Davidson, A., Ren, R., Wong, D.: Can we keep him forever? Teens' engagement and de-
sire for emotional connection with a social robot. In-
ternational Journal of Social Robotics (2019). DOI
10.1007/s12369-019-00539-6. Online first publication
12. Borges, A.M., Banyard, V.L., Moynihan, M.M.: Clarify-
ing Consent: Primary Prevention of Sexual Assault on
a College Campus. Journal of Prevention & Interven-
tion in the Community 36(1-2), 75–88 (2008). DOI
10.1080/10852350802022324
13. Breazeal, C.L.: Designing sociable robots. MIT Press,
Cambridge, MA (2002)
14. Byers, E.S., Heinlein, L.: Predicting initiations and re-
fusals of sexual activities in married and cohabiting cou-
ples. Journal of Sex Research 26, 210–231 (1989)
15. Cappuccio, M.L., Peeters, A., McDonald, W.: Sympa-
thy for Dolores: Moral consideration for robots based on
virtue and recognition. Philosophy & Technology pp. 1–
23 (2019). DOI 10.1007/s13347-019-0341-y. Online first
publication.
16. Clark, A.: Soft selves and ecological control. In: D. Ross,
D. Spurrett, H. Kincaid, G.L. Stephens (eds.) Distributed
cognition and the will: Individual volition and social con-
text, pp. 101–122. MIT Press (2007)
17. Coeckelbergh, M.: Growing Moral Relations: Critique of
Moral Status Ascription. Palgrave (2012)
18. Danaher, J., McArthur, N. (eds.): Robot Sex. Social and
Ethical Implications. MIT Press, Cambridge, MA (2017)
19. Danielson, P.: Artificial morality: Virtuous robots for vir-
tual games. Routledge, London (1992)
20. Deng, B.: Machine ethics: The robot’s dilemma. Nature
523(7558), 24–26 (2015). DOI 10.1038/523024a
21. Dhawan, N., Kunik, M.E., Oldham, J., Coverdale, J.:
Prevalence and treatment of narcissistic personality dis-
order in the community: A systematic review. Com-
prehensive Psychiatry 51(4), 333–339 (2010). DOI
10.1016/j.comppsych.2009.09.003
22. Di Paolo, E.A., Buhrmann, T., Barandiaran, X.E.: Sen-
sorimotor Life: An Enactive Proposal. Oxford Uni-
versity Press, Oxford (2017). DOI 10.1093/acprof:oso/9780198786849.001.0001
23. Fernandes, K., Cardoso, J.S., Astrup, B.S.: A deep learn-
ing approach for the forensic evaluation of sexual as-
sault. Pattern Analysis and Applications 21(3), 629–640
(2018). DOI 10.1007/s10044-018-0694-3
24. Fischer, J.M., Ravizza, M.: Responsibility and Control:
A Theory of Moral Responsibility. Cambridge University
Press, Cambridge (1998)
25. Floridi, L., Sanders, J.W.: On the morality of artificial
agents. Minds and Machines 14(3), 349–379 (2004). DOI
10.1023/B:MIND.0000035461.63578.9d
26. Fr¨oding, B.E.E.: Cognitive enhancement, virtue ethics
and the good life. Neuroethics 4(3), 223–234 (2011). DOI
10.1007/s12152-010-9092-2
27. Gips, J.: Towards the ethical robot. In: K.M. Ford (ed.)
Android epistemology, pp. 243–252. MIT Press, Cam-
bridge, MA (1995)
28. Goetz, J.L., Keltner, D., Simon-Thomas, E.: Compas-
sion: An evolutionary analysis and empirical review.
Psychological Bulletin 136(3), 351–374 (2010). DOI
10.1037/a0018807
29. Güçlütürk, Y., Güçlü, U., Baró, X., Escalante, H.J.,
Guyon, I., Escalera, S., van Gerven, M.A.J., van Lier,
R.: Multimodal first impression analysis with deep resid-
ual networks. IEEE Transactions on Affective Computing
pp. 1–1 (2017). DOI 10.1109/TAFFC.2017.2751469
30. Humphreys, T.P.: Understanding sexual consent: An em-
pirical investigation of the normative script for young
heterosexual adults. In: M. Cowling, P. Reynolds (eds.)
Making Sense of Sexual Consent. Ashgate (2004)
31. Janssen, J.H., Tacken, P., de Vries, J.G.J., van den Broek,
E.L., Westerink, J.H., Haselager, P., IJsselsteijn, W.A.:
Machines outperform laypersons in recognizing emo-
tions elicited by autobiographical recollection. Human–
Computer Interaction 28(6), 479–517 (2013). DOI
10.1080/07370024.2012.755421
32. Levy, D.: Intimate relationships with artificial partners.
Maastricht University, Maastricht (2007). Unpublished
doctoral dissertation.
33. Levy, D.: Love and Sex with Robots: The Evolution
of Human–Robot Relationships. Harper-Perennial, New
York (2007)
34. Levy, D.: The ethics of robot prostitutes. In: P. Lin,
K. Abney, G.A. Bekey (eds.) Robot Ethics: The Ethical
and Social Implications of Robotics, pp. 223–232. MIT
Press, Cambridge, MA (2012)
35. Lim, G.Y., Roloff, M.E.: Attributing sexual consent.
Journal of Applied Communication Research 27(1), 1–
23 (1999). DOI 10.1080/00909889909365521
36. Miranda, J.A., Canabal, M.F., Portela García, M., López-Ongil, C.: Embedded emotion recognition: Autonomous multimodal affective internet of things. In: F. Palumbo, C. Pilato, L. Pulina, C. Sau (eds.) Proceedings of the Cyber-Physical Systems Workshop 2018, vol. 2208, pp. 22–29. Alghero, Italy (2018)
37. Richardson, K.: Sex robot matters: Slavery, the pros-
tituted, and the rights of machines. IEEE Technol-
ogy and Society Magazine 35(2), 46–53 (2016). DOI
10.1109/MTS.2016.2554421
38. Rituerto-González, E., Mínguez-Sánchez, A., Gallardo-Antolín, A., Peláez-Moreno, C.: Data augmentation for
speaker identification under stress conditions to com-
bat gender-based violence. Applied Sciences 9(11), 2298
(2019). DOI 10.3390/app9112298
39. Scheutz, M.: The inherent dangers of unidirectional emo-
tional bonds between humans and social robots. In:
P. Lin, K. Abney, G.A. Bekey (eds.) Robot Ethics: The
Ethical and Social Implications of Robotics, pp. 205–222.
MIT Press, Cambridge, MA (2012)
40. Scheutz, M., Arnold, T.: Intimacy, bonding, and sex
robots: Examining empirical results and exploring eth-
ical ramifications. In: J. Danaher, N. McArthur (eds.)
Robot Sex. Social and Ethical Implications, pp. 247–260.
MIT Press, Cambridge, MA (2017)
41. Sharkey, N.: The ethical frontiers of robotics. Sci-
ence 322(5909), 1800–1801 (2008). DOI 10.1126/science.
1164582
42. Sharkey, N., van Wynsberghe, A., Robbins, S., Hancock,
E. (eds.): Our sexual future with robots: A foundation
for responsible robotics consultation report. Foundation
for Responsible Robotics (2017)
43. Shirado, H., Christakis, N.: Locally noisy autonomous
agents improve global human coordination in network
experiments. Nature 545(7654), 370–374 (2017). DOI
10.1038/nature22332
44. Santoni de Sio, F., van den Hoven, J.: Meaningful hu-
man control over autonomous systems: A philosophical
account. Frontiers in Robotics and AI 5, 1–14 (2018).
DOI 10.3389/frobt.2018.00015
45. Sparrow, R.: The march of the robot dogs. Ethics and
Information Technology 4(4), 305–318 (2002). DOI 10.
1023/A:1021386708994
46. Sparrow, R.: Kicking a robot dog. In: 2016 11th
ACM/IEEE International Conference on Human-Robot
Interaction (HRI), p. 229. IEEE (2016). DOI 10.1109/HRI.2016.7451756
47. Sparrow, R.: Robots, rape, and representation. Interna-
tional Journal of Social Robotics 9(4), 465–477 (2017).
DOI 10.1007/s12369-017-0413-z
48. Strikwerda, L.: Legal and moral implications of child sex
robots. In: J. Danaher, N. McArthur (eds.) Robot Sex.
Social and Ethical Implications, pp. 133–152. MIT Press,
Cambridge, MA (2017)
49. Tonkens, R.: Out of character: on the creation of virtu-
ous machines. Ethics and Information Technology 14(2),
137–149 (2012). DOI 10.1007/s10676-012-9290-1
50. Traeger, M., Sebo, S., Jung, M., Scassellati, B., Chris-
takis, N.: Vulnerable robots positively shape human con-
versational dynamics in a human-robot team. Unpub-
lished manuscript. Presented at Center for Empirical Re-
search on Stratification and Inequality Spring 2019 Work-
shop at Yale University on January 31, 2019.
51. Vallor, S.: Technology and the virtues: A philosophical
guide to a future worth wanting. Oxford University Press,
Oxford (2016)
52. Varela, F.J., Thompson, E., Rosch, E.: The embodied
mind: Cognitive science and human experience. MIT
Press, Cambridge, MA (1991)
53. Verbeek, P.P.: Moralizing Technology: Understanding
and Designing the Morality of Things. University of
Chicago Press, Chicago, IL (2011)
54. Wallach, W., Allen, C.: Moral machines: Teaching robots
right from wrong. Oxford University Press, Oxford (2009)
55. Winfield, A.F.T., Blum, C., Liu, W.: Towards an eth-
ical robot: Internal models, consequences and ethi-
cal action selection. In: M. Mistry, A. Leonardis,
M. Witkowski, C. Melhuish (eds.) Advances in Au-
tonomous Robotics Systems, Lecture Notes in Computer
Science, vol. 8717, pp. 85–96. Springer (2014). DOI
10.1007/978-3-319-10401-0_8
56. Yamaji, Y., Miyake, T., Yoshiike, Y., De Silva, P.R.S.,
Okada, M.: STB: Child-dependent sociable trash box.
International Journal of Social Robotics 3(4), 359–370
(2011). DOI 10.1007/s12369-011-0114-y