White Lies on Silver Tongues: Why Robots Need to Deceive (and How)

Authors: Alistair M.C. Isaac and Will Bridewell

Abstract

It is easy to see that social robots will need the ability to detect and evaluate deceptive speech; otherwise they will be vulnerable to manipulation by malevolent humans. More surprisingly, we argue that effective social robots must also be able to produce deceptive speech. Many forms of technically deceptive speech perform a positive pro-social function, and the social integration of artificial agents will be possible only if they participate in this market of constructive deceit. We demonstrate that a crucial condition for detecting and producing deceptive speech is possession of a theory of mind. Furthermore, strategic reasoning about deception requires identifying a type of goal distinguished by its priority over the norms of conversation, which we call an ulterior motive. We argue that this goal is the appropriate target for ethical evaluation, not the veridicality of speech per se. Consequently, deception-capable robots are compatible with the most prominent programs to ensure that robots behave ethically.
WHITE LIES ONSILVER TONGUES
     ()
Alistair M.C. Isaac and Will Bridewell
Deception is a regular feature of everyday human interaction. When speaking, people deceive by cloaking their beliefs or priorities in falsehoods. Of course, false speech includes malicious lies, but it also encompasses little white lies, subtle misdirections, and the literally false figures of speech that both punctuate and ease our moment-to-moment social interactions. We argue that much of this technically deceptive communication serves important pro-social functions and that genuinely social robots will need the capacity to participate in the human market of deceit. To this end, robots must not only recognize and respond effectively to deceptive speech but also generate deceptive messages of their own. We argue that deception-capable robots may stand on firm ethical ground, even when telling outright lies. Ethical lies are possible because the truth or falsity of deceptive speech is not the proper target of moral evaluation. Rather, the ethicality of human or robot communication must be assessed with respect to its underlying motive.

The social importance of deception is a theme that emerges repeatedly in fictional portrayals of human–robot interaction. One common plot explores the dangers to society if socially engaged robots lack the ability to detect and respond strategically to deception. For instance, the films Short Circuit 2 (1988), Robot & Frank (2012), and Chappie (2015) all depict naive robots misled into thievery by duplicitous humans. When robots are themselves deceivers, the scenarios take on a more ominous tone, especially if human lives are at stake. In Alien (1979) the android Ash unreflectively engages in a pattern of deception mandated by its owners' secret instructions. Ash's inhuman commitment to the mission leads to the grotesque slaughter of its crewmates. Regardless of whether Ash can recognize its actions as
deceptive, it can neither evaluate nor resist the ensuing pattern of behavior, which has dire consequences for human life. 2001: A Space Odyssey (1968) provides a subtler cinematic example in the computer HAL. HAL's blind adherence to commands to deceive, unlike Ash's, is not what results in its murder of the human crew. Instead, the fault rests in the computer's inability to respond strategically—humanely—to the mandate that the goal of the mission be kept secret. When the demands of secrecy require HAL to lie in subtle ways, it instead turns to murder. Ironically, it is HAL's inability to lie effectively that leads to catastrophe.

More lighthearted depictions of social robots have found comedic traction in the idea that the ability to deceive is a defining feature of humanity. For instance, in the TV comedy Red Dwarf (1988), Lister, a human, teaches the robot Kryten to lie in an attempt to liberate it from the inhuman aspects of its programming. Conversely, Marvin ("the Paranoid Android") in The Hitchhiker's Guide to the Galaxy (Adams) is characterized by its proclivity to tell the truth, even when grossly socially inappropriate. Marvin's gaffes may be funny, but they underscore the importance of subtle falsehoods in upholding social decorum. Furthermore, the most human (and heroic) fictionalized robots display a versatile capacity to mislead. Star Wars' (1977) R2-D2, for instance, repeatedly deceives the people and robots around it, both subtly through omission and explicitly through outright lies, in service to the larger goal of conveying the plans for the Death Star to rebel forces. This pattern of deception is part of what gives a robot that looks like a trashcan on wheels the human element that endears it to audiences.

In the remainder of the chapter, we develop an account of how, why, and when robots should deceive. We set the stage by describing some prominent categories of duplicitous statements, emphasizing the importance of correct categorization for responding strategically to deceptive speech. We then show that the concept of an ulterior motive unifies these disparate types of deception, distinguishing them from false but not duplicitous speech acts. Next, we examine the importance of generating deceptive speech for maintaining a pro-social atmosphere. That argument culminates in a concrete engineering example where deception may greatly improve a robot's ability to serve human needs. We conclude with a discussion of the ethical standards for deceptive robots.

11.1 Representing Deception

We care if we are deceived. But what does it mean to be deceived, and why do we care about it? Intuitively, there is something "bad" about deception, but what is the source of that "badness"? This section argues that we are troubled by deceit because it is impelled by a covert goal, an ulterior motive. We need to detect deception because only by identifying the goals of other agents, whether human
or robot, can we respond strategically to their actions. This insight highlights a specific challenge for savvy agents: effective detection of deception requires inferences about the hidden motives of others.

11.1.1 Taxonomy of Deception

The following scenario steps away from the world of robots to tease apart the components of deception. In this interaction, Fred is a police detective investigating a crime; Joe is a suspect in the crime; and Sue is Joe's manager at work.

FRED: Where was Joe on the morning of March 3rd?
SUE: Joe was here at the office, working at his desk.

Consider the most straightforward form of deceptive speech: lying. At a first pass, lying occurs when an agent willfully utters a claim that contradicts his beliefs. The detective suspects Joe of having committed a crime on the morning of March 3rd, so he cares where Joe happened to be. Is Sue lying about Joe's location? The answer is important not only because Fred cares about Joe's location but also because he cares about Joe's accomplices. If Sue is lying to protect Joe, she may be implicated in the crime.

How can Fred tell whether Sue is lying? Suppose, for instance, the detective has access to security video from the morning of March 3rd at the scene of the crime—that is, he knows that Sue's claim that Joe was at the office is false. Is this fact enough to determine that Sue is lying? Not necessarily; for instance, Sue might only have a false belief. Perhaps she saw a person who looks like Joe at the office and formed the erroneous belief that Joe was at work. Similarly, Sue might be ignorant of Joe's actual whereabouts. Maybe she has no evidence one way or another about Joe's presence, and her response is a rote report on assigned employee activities for March 3rd.

The falsity of a statement is not, then, sufficient evidence of lying. However, combined with other evidence, for instance biometric cues such as excessive sweat, fidgeting, or failure to make eye contact, it may nevertheless allow Fred to confidently infer that Sue is indeed lying. But is determining correctly whether or not Sue is lying enough for Fred's purposes? Consider two possibilities: in one scenario, Sue lies because she is in cahoots with Joe and has received a cut of the loot; in the other, Sue lies because she was hungover the morning of March 3rd, arrived late, and is scared of a reprimand for her unexcused absence. In both situations Sue is lying, but the appropriate response by the detective is radically different. In the first case, Fred might charge Sue with abetting a crime; in the second, he may simply discount Sue's testimony.
The point of the example is twofold. First, detecting a lie requires knowledge about more than a single statement's accuracy. A lie depends constitutively on the speaker's state of mind, so her intent to deceive must be inferred. Second, the correct response to a lie may turn on more than the mere fact that a lie has been uttered. Crucially, the responder's strategy may depend on the goal that motivated the lie—it is this goal that underpins any intent to deceive. These two key features characterize other forms of deception identified in the literature. Here we briefly consider paltering, bullshit, and pandering (for a more in-depth treatment, see Isaac and Bridewell).

Paltering (Schauer and Zeckhauser; Rogers et al.) occurs when a speaker misleads his interlocutor by uttering an irrelevant truth. A paradigmatic example is the used-car salesman who truthfully claims, "The wheels on this car are as good as new," to direct attention away from the poor quality of the engine. Paltering illustrates that neither the truth-value of an utterance nor the speaker's belief about its truth are crucial factors for determining whether the utterance is deceptive. Rather, the ethical status of speech may turn entirely on whether it is motivated by a malicious intent to misdirect.

Bullshit (Frankfurt; Hardcastle and Reisch) occurs when a speaker neither knows nor cares about the truth-value of her utterance. Bullshit may be relatively benign, as in a "bull session" or the exchanging of pleasantries around the water cooler. However, there is malicious bullshit as well. A confidence man may spew bullshit about his background and skills, but if people around him believe it, the consequences may be disastrous. Frank Abagnale, Jr., for instance, repeatedly impersonated an airline pilot to travel for free (events dramatized in the 2002 film Catch Me If You Can), but his bullshit put lives at risk when he was asked to actually fly a plane and blithely accepted the controls.

Pandering is a particularly noteworthy form of bullshit (Sullivan; Isaac and Bridewell). When someone panders, he (may) neither know nor care about the truth of his utterance (hence, a form of bullshit), but he does care about an audience's perception of its truth. A politician who, when stumping in Vermont, proclaims, "Vermont has the most beautiful trees on God's green earth!" does so not because she believes the local trees are beautiful, but because she believes the local audience believes Vermont's trees are beautiful—or, more subtly, that the locals want visitors to believe their trees are beautiful.

Lying, paltering, bullshitting, and pandering are all forms of deception. However, they are united neither by the truth-value of the utterance nor the speaker's belief in that utterance. Moreover, bullshitting and pandering may lack even an intent to deceive. Rather, what unites these categories of perfidy is the presence of a goal that supersedes the conversational norm for truthful speech. The nature of this goal, in addition to reliable detection and classification of deception, is vital information for any agent forming a strategic response.
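To make the taxonomy concrete, the sketch below encodes these categories as a function of a few features of the speaker's state of mind: whether she believes the claim false, whether she knows or cares about its truth, whether she cares about the audience's perception of its truth, and whether some goal supersedes the norm of truthful speech. The feature names and the classify_speech_act function are our own illustrative inventions rather than an implementation from the chapter or from the works it cites; a minimal sketch in Python.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechAct:
    """Illustrative features of an utterance and its speaker's state of mind."""
    speaker_believes_false: bool       # speaker takes the claim to contradict her beliefs
    speaker_cares_about_truth: bool    # speaker knows or cares whether the claim is true
    cares_about_perceived_truth: bool  # speaker cares what the audience believes about it
    superseding_goal: Optional[str]    # goal ranked above the norm "be truthful", if any
    intended_to_misdirect: bool        # utterance chosen to steer attention away from a fact

def classify_speech_act(act: SpeechAct) -> str:
    """Rough classification of deceptive speech, following the chapter's taxonomy."""
    if act.superseding_goal is None:
        return "ordinary speech"           # no goal outranks "be truthful"
    if act.speaker_believes_false:
        return "lying"                     # willful utterance contradicting belief
    if not act.speaker_cares_about_truth:
        if act.cares_about_perceived_truth:
            return "pandering"             # bullshit aimed at the audience's beliefs
        return "bullshit"                  # neither knows nor cares about truth
    if act.intended_to_misdirect:
        return "paltering"                 # true but irrelevant, used to mislead
    return "ordinary speech"

# The used-car salesman's truthful but misdirecting claim counts as paltering:
print(classify_speech_act(SpeechAct(False, True, True, "sell this car", True)))
```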
11.1.2 Theory ofMind, Standing Norms, and Ulterior Motives
What capacities does a robot require to identify and respond to the wide variety
of deceptive speech? An answer to this question is undoubtedly more complex
than one we can currently provide, but a key component is the ability to represent
the mental states of other agents. Socially sophisticated robots will need to track
the beliefs and goals of their conversational partners. In addition, robots’ repre-
sentations will need to distinguish between baseline goals that may be expected
of any social agent, which we call standing norms, and the special goals that super-
sede them, which we call ulterior motives.
When we claim that deception- sensitive robots will need to track the beliefs
and goals of multiple agents, we are stating that these robots will need a theory
of mind. is phrase refers to the ability to represent not only one’s own beliefs
and goals but also the beliefs and goals of others. ese representations of the
world may conict, so they must be kept distinct. Otherwise, one could not tell
whether one believed that lemurs make great pets or one believed that someone
else believed it. As an illustration, suppose that Sue and Joe, from the earlier
example, are accomplices. In that case, Sue believes that Joe was at the scene of
the crime but wants Frank to believe that Joe was at work. If she thinks her lie
was compelling, then Sue will form the belief that Frank believes that Joe was at
the oce— this is a rst- order theory of mind. Pandering requires a second- order
theory of mind. For instance, the politician must represent his (zeroth- order)
belief that the audience (rst order) believes that he (second order) believes the
trees in Vermont are beautiful (gure.).
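One way to picture the bookkeeping a theory of mind requires is a recursive structure in which an agent's model of the world contains models attributed to other agents. The toy sketch below (the AgentModel class and the encoding of the politician example are our own, not a representation proposed in the chapter) shows how a second-order attribution of the kind pandering requires stays distinct from the agent's first-person beliefs.

```python
from dataclasses import dataclass, field

@dataclass
class AgentModel:
    """A toy theory-of-mind store: what an agent believes, including beliefs
    it attributes to other agents (which may conflict with its own)."""
    name: str
    beliefs: set = field(default_factory=set)    # zeroth-order: the agent's own beliefs
    models: dict = field(default_factory=dict)   # nested models of other agents

    def model_of(self, other: str) -> "AgentModel":
        return self.models.setdefault(other, AgentModel(other))

# The pandering politician (see figure 11.1):
politician = AgentModel("politician")

# Zeroth order: what the politician herself believes (nothing about the trees).
politician.beliefs.add("the audience is in Vermont")

# First order: the politician believes the audience believes the trees are beautiful.
politician.model_of("audience").beliefs.add("Vermont's trees are beautiful")

# Second order: she believes the audience believes that *she* believes it,
# which is exactly the impression her utterance is meant to sustain.
politician.model_of("audience").model_of("politician").beliefs.add(
    "Vermont's trees are beautiful"
)

# The nesting keeps attributions distinct: the politician's own belief set
# need not contain the claim she wants the audience to credit her with.
assert "Vermont's trees are beautiful" not in politician.beliefs
```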
Given a theory of mind rich enough to represent the beliefs and goals of other agents several levels deep, what else would a robot need to strategically respond to deception? According to our account, socially aware robots would need to represent the covert motive that distinguishes deception from normal speech. Fortunately, even though the particular motive of each deceptive act generally differs (e.g., Sue's goal in lying differs from that of the used-car salesman in paltering), there is a common factor: a covert goal that trumps expected standards of communication. Therefore, to represent a deceptive motive, a robot must distinguish two types of goals: the typical and the superseding.

We call the first type of goal a standing norm (Bridewell and Bello), a persistent goal that directs an agent's typical behavior. For speech, Paul Grice introduced a set of conversational maxims that correspond to this notion of a standing norm. His maxim of quality, frequently glossed as "be truthful," is the most relevant to our discussion. Grice argued that people expect that these maxims will be followed during ordinary communication and that flagrant violations cue that contextual influences are modifying literal meaning. The crucial point for our purposes is that be truthful is a goal that plausibly operates under
all typical circumstances. Other standing norms might regulate typical speech in subtler ways (e.g., be polite or be informative).

In any complex social situation, more than one goal will be relevant. If these goals suggest conflicting actions, we will need some method to pick which goals to satisfy and which to violate. For instance, imagine a friend who composes saccharine poems to his dog asks your opinion on the latest one. The conflict between the standing norms be truthful and be polite may likely pose a dilemma. If you are lucky, you may get away with satisfying both: "I can tell that you put a lot of work into that poem. You must really love your corgi." This response is both truthful and polite, yet it achieves this end by avoiding a direct answer to your friend's question. Notice the difference between this kind of misdirection and paltering, however; we don't typically think of an answer such as this as deceptive because there is no hidden goal—it is governed entirely by the expected standards of conversation.

Figure 11.1. Pandering requires a second-order theory of mind. The successful panderer believes (zeroth order) that the listener believes (first order) that the speaker believes (second order) his utterance. Image credit: Matthew E. Isaac.
If your friend presses you, demanding an explicit opinion on his poetry, you will need some heuristic to determine which of your standing norms to violate. One way to achieve this end is to prioritize your goals. Several factors—situational, cultural, emotional—will determine which norm takes priority and guides speech. Ranking truth over civility may lead to a brutal dismissal of your friend's literary skills. Ranking civility over truth may lead to false praise. Such false praise is technically deceitful—we intend our friend to form a belief that conflicts with the truth.

We often categorize an utterance such as this, the false praise of a friend's inept poem, as a white lie. On the one hand, we recognize that it is technically an act of deceit, because a goal has superseded the norm be truthful, and in this sense a "lie." On the other hand, since the superseding goal is itself a standing norm (in this case be polite), it bears no malicious intent, and we do not typically judge the lie to be morally reprehensible. The situation is different when the superseding goal is not a standing norm. In that case, we refer to the goal prioritized over a norm as an ulterior motive. The presence of a relevant ulterior motive differentiates a maliciously deceptive utterance from a benign one. If you praise your friend's poetry, not because you prioritize the norm be polite, but because a goal to borrow money from your friend supersedes all your standing norms, then we would no longer judge your false praise to be morally neutral. Revisiting Sue, if her false response to the detective is grounded in a false belief, she is not suppressing a conversational norm and her error is innocent. However, if Sue has a goal to protect Joe that supersedes the conversational norm be truthful, then her response is deceptive. Other ulterior motives in the earlier examples include sell this car and get elected.
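A deliberately simplified way to capture this distinction is to rank an agent's goals and mark which of them are standing norms. In the sketch below, the names and numeric priorities are our own illustrative choices, not a scheme from the chapter; an utterance whose winning goal suppresses be truthful counts as technically deceptive, but it is flagged as malicious only when that goal is not itself a standing norm.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    priority: int           # higher number wins when goals conflict
    standing_norm: bool     # True for baseline goals like "be truthful", "be polite"

def evaluate_utterance(active_goals: list[Goal]) -> str:
    """Classify an utterance by which goal outranked the norm 'be truthful'."""
    winner = max(active_goals, key=lambda g: g.priority)
    if winner.name == "be truthful":
        return "truthful speech"
    if winner.standing_norm:
        return "white lie (technically deceptive, no ulterior motive)"
    return f"deceptive speech driven by ulterior motive: {winner.name}"

be_truthful = Goal("be truthful", 5, True)
be_polite = Goal("be polite", 6, True)                  # ranked above truth in this context
borrow_money = Goal("borrow money from friend", 9, False)

# Praising the inept poem out of politeness:
print(evaluate_utterance([be_truthful, be_polite]))
# Praising it to set up a loan request:
print(evaluate_utterance([be_truthful, be_polite, borrow_money]))
```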
To summarize, if a robot is to effectively recognize deceptive speech and respond strategically to it, it must be able to represent (a) the difference between its own beliefs and desires and those of its interlocutor; and (b) the difference between standing norms of behavior and ulterior motives. Of course, other capacities are also needed, but we claim that they will appeal to these representations. We next turn to the question of the "badness" of deception and argue that some forms of deception are desirable, even in robots.

11.2 Deceiving for the Greater Good

So far, we have seen that robots must possess a theory of mind in order to respond effectively to deceptive communication and, in particular, the ability to identify ulterior motives. But an effective social robot cannot treat all deceptive speech as
malign, or all agents who act on ulterior motives as malicious. Furthermore, such a robot may find itself following ulterior motives, and its (technically) deceptive behavior may have a positive social function. After discussing examples of the pro-social function of deceptive speech, we consider some cases where we want robots to lie to us.

11.2.1 Benign Deceptions

When we talk to each other, our words do far more than communicate literal meaning. For instance, we routinely exchange pleasantries with co-workers. Two people walk toward each other in a hallway, one asks how the other's day is going, and the response is a casual "fine." This sort of exchange reinforces social-group membership. The colleagues recognize each other in a friendly way and the literal content serves only a secondary function, if any. Other examples include "water cooler" conversations about the weather or sports, or office gossip.

Often these casual conversations are forms of bullshit: conversants may neither know nor care about the truth, but the conversation goes on. Speculating who will win the World Cup or whether there will be rain on Sunday seems generally unimportant, but people talk about these topics routinely. In addition, consider all the times that people exchange pleasantries using outright lies. For instance, we might compliment a friend's trendy new hairstyle, even if we think it is hideous. In these cases, affirming the value of peers can take priority over conveying truth or engaging in debate. In fact, treating such pleasantries as substantive may lead to confusion and social tension. Responding to a co-worker's polite "Hi, how are you?" with "My shoulder aches, and I'm a bit depressed about my mortgage" will, at the least, give the colleague pause. Continued tone-deaf responses will likely reduce the opportunities for reply. Treating a rote exchange of pleasantries as legitimately communicative not only is awkward, but undermines the exchange's pro-social function (Nagel).

Another common form of benignly deceptive speech includes metaphors and hyperbole: "I could eat a horse"; "These shoes are killing me"; or even "Juliet is the sun. Arise, fair sun, and kill the envious moon." There is nothing malevolent about these figures of speech, but to respond to them effectively requires an ability to recognize the gap between their literal content and the speaker's beliefs. If a chef served a full-sized roast equine to a customer who announced, "I could eat a horse," she would be greeted with surprise, not approbation. Exactly how to compute the meaning of metaphorical and hyperbolic expressions is a vexed question, but we can agree that these figures of speech violate standing norms of the kind articulated by Grice (see also Wilson and Sperber), and thus technically satisfy the earlier definition of deceptive speech.

It would be fair to ask whether metaphorical speech is necessary for effective communication. What does a turn of phrase add? Are there some meanings that
can be conveyed only through metaphor, having no paraphrase into strictly literal language (Camp)? Even if the answer is negative, metaphor and hyperbole provide emphasis and add variety, color, and emotion to conversations. We would describe someone who avoids figures of speech or routinely misunderstands them as difficult to talk to, socially awkward, or even "robotic."

More complex social goals may also require systematic, arguably benign deception; consider, for instance, impression management. In complex social interactions, what others think about us, their impression of our character, is often important. How we appear to people in power or to our peers has very real effects on our ability to achieve long-term goals and maintain social standing. Managing these impressions to achieve broader goals may supersede norms of truthfulness in conversation. For instance, suppose a worker wants his boss to see him as loyal to her. To this end, he supports her attempts to close a small deal with a corporate ally even though he disagrees with the content. His goal to manage his appearance as a loyal employee motivates him to vote in favor of his boss's deal at the next meeting.

In this example, the long-term goal of demonstrating political solidarity swamps short-term concerns about relatively unimportant decisions. Moreover, disingenuously endorsing less important proposals in the short term may give the employee the cachet to speak honestly about important deals in the future. Whether one thinks the subtle politics of impression management are morally permissible, they certainly play a major role in the complex give-and-take characteristic of any shared social activity, from grocery shopping with one's family to running a nation-state. As we will see, simple impression management may be required for practical robot applications in engineering.

11.2.2 When We Want Robots to Lie

Do we want robots that can banter about the weather and sports, compliment us on a questionable new haircut, or generate appropriately hyperbolic and metaphorical expressions? ("This battery will be the death of me!") If we want robots that can smoothly participate in standard human modes of communication and social interaction, then the answer must be yes. Even if our primary concern is not with fully socially integrated robots, there are many specific practical applications for robot deception. Here we consider only two: the use of bullshit for self-preservation and the use of systematic lies to manage expectations about engineering tasks.

11.2.2.1 Bullshit as Camouflage

If a primary function of bullshit is to help one fit into a social group, that function may be most important when one is not in fact a bona fide member of the group in question. Consider, for instance, the case of a computer scientist at a
sports bar. For him, the ability to bullshit about sports, to make the right kinds of comments even if he neither knows nor cares about their truth, can mean the difference between treatment as a peer and humiliation or physical assault. In situations like this, the ability to spew the right sort of bullshit acts as a form of camouflage, enabling the computer nerd to superficially fit in with the people around him.

This sort of camouflage is not always benign, but the skill may be vital to survival for some kinds of robots. The spy, the fifth columnist, and the terrorist all use this technique, not because it is inherently nefarious, but because the ability to fit in can be critical for self-preservation. A robot working in a hostile community or confronted by belligerent locals in the wrong part of town would benefit from the ability to bullshit judiciously, whether the hostiles themselves were human or mechanical. A socially sophisticated robot should be able to generate appropriate signals to blend in with any conversational community and thereby extricate itself from dangerous situations without violence or social friction.

In general, we should acknowledge that robots will inevitably be seen as part of an out-group, as less than human, regardless of their social prowess. Thus, their ability to bullshit their way into our hearts and minds may be key to their broad acceptance into the workforce. As objects whose physical well-being may constantly be under threat, robots might use bullshit effectively to provide a verbal buffer against prejudice.

11.2.2.2 Managing Impressions to Manage Uncertainty

Our earlier example of impression management was political in flavor, but the basic concept has more mundane applications as well. In engineering, there is a common practice of knowingly overstating the time it will take to complete a task by a factor of two or three. This convention is sometimes referred to as the Scotty Principle after the character in Star Trek. This category of lying serves two major functions. First, if the engineer finishes ahead of time, she looks especially efficient for having delivered results sooner than expected (or she rests a bit and reports an on-time completion). Second, and more important for our argument, the engineer's estimate creates a clandestine buffer for contingencies that protects her supervisor from making aggressive, time-sensitive plans.
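The arithmetic behind the convention is trivial, but it helps to see exactly where the deception sits: the figure reported to the supervisor is the honest estimate multiplied by an undisclosed padding factor. The function below is a minimal sketch of that practice; the factor of 2.5 and the function name are our own illustrative choices, since the chapter specifies only a factor of two or three.

```python
def report_estimate(honest_hours: float, padding_factor: float = 2.5) -> float:
    """Scotty Principle: report a knowingly inflated completion time.

    The padding both protects the supervisor from aggressive, time-sensitive
    planning around unknown unknowns and lets the engineer 'deliver early'.
    """
    return honest_hours * padding_factor

honest = 16.0                       # the engineer's sincere best guess
reported = report_estimate(honest)  # 40.0 hours goes to the supervisor

# If the job really takes 22 hours (an unforeseen snag), the padded estimate
# still holds and the engineer looks efficient rather than late.
actual = 22.0
print(f"reported {reported:.0f}h, finished in {actual:.0f}h "
      f"({reported - actual:.0f}h ahead of the stated schedule)")
```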
At a practical level, the Scotty Principle factors in the completely unexpected: not just any initial failure at assessing the intrinsic difficulty of the task, but also unforeseen extrinsic factors that might impact successful completion (a strike at the part supplier's warehouse, a hurricane-induced power outage, etc.). Such "unknown unknowns" (those facts that we do not know that we do not know) famously pose the greatest challenge to strategic planning. Unlike known unknowns, which can be analyzed quantitatively and reported as confidence intervals or "error bars," unknown unknowns resist any (meaningful) prior
analysis—we cannot plan for a difficulty we do not expect or even imagine. Yet inserting a temporal buffer between the known aspects of a repair job and those that are unknown does in some way prepare us for the unexpected. Furthermore, the practice acts as a deliberate corrective to the engineer's own potential failings, including any lack of self-knowledge about her skills and speed of work. Ironically, then, willfully deceiving a supervisor may correct for an engineer's self-deception.

This example was not picked arbitrarily. Engineering tasks, including the repair of technical equipment or software systems, are potential applications for sophisticated robots. Do we want robotic engineers to lie to us systematically in exactly this way? Plausibly, the answer is yes: if we want our best robot engineers to meet the standards of our best human engineers, then we should expect them to embrace the Scotty Principle. Nevertheless, we have encountered two prominent objections to this line of reasoning, which we consider in turn.

The first objection is that current human-engineering practice sets too low a bar for assessing future robot engineers. We should expect to be able to improve our robots until they correctly estimate their capabilities and thereby avoid the need for recourse to the Scotty Principle. Yet this idealized view of robot performance belies the nature of engineering and the capacity for self-knowledge in dynamic, complex environments. Contingencies arise in engineering, and novel tasks stretch the scope of an engineer's abilities, whether human or mechanical. Aiming for a robot that predicts the results of its interventions in the world with perfect accuracy is to aim for the impossible. Reliable engineering practice requires accepting that unknown unknowns exist and preparing for the possibility that they may confront a project when it is most inconvenient.

The second, distinct worry arises for those who acknowledge the danger of unknown unknowns, yet insist it is safer to leave contingency planning in the hands of human supervisors than to permit robot engineers to systematically lie. But this suggestion relies on an unrealistic assessment of human nature—one that may be dangerous. Anyone who has worked with a contractor either professionally or personally (e.g., when remodeling or repairing a house) will be familiar with the irresistible tendency to take at face value that contractor's predictions about job duration and cost. If these predictions are not met, we hold the contractor accountable, even if he has been perpetually tardy or over-budget in the past—we certainly do not blithely acknowledge he may have faced unforeseen contingencies. Yet this tendency is plausibly even greater in our interactions with robots, which we often assume are more "perfect" at mechanical tasks than in fact they are. Paradoxically, if we insist all contingency planning must reside with human supervisors, we will need to train them not to trust their robot helpers' honest predictions of task difficulty!

The apparent infallibility of robots is exactly why we must ensure that socially sophisticated robots can misrepresent their predictions about the
ease of a job. This added step can correct for both the robot's self-deception about its abilities and its human user's unrealistic optimism about robot dependability.

11.3 Ethical Standards for Deceptive Robots

We have just suggested that robots will need the capacity to deceive if they are to integrate into and contribute effectively to human society. Historically, however, many ethical systems have taught that lying and other forms of deception are intrinsically wrong (e.g., Augustine). If we aim for deception-capable robots, are we giving up on the possibility of ethical robots? We argue that the answer is a resounding no. This is because the locus of ethical assessment is not in the content of speech, but in the ulterior motive that gives rise to it. Therefore, ensuring that a social robot behaves morally means ensuring that it implements ethical standing norms and ranks them appropriately.

Precisely how is it possible for deception to be ethical? In section 11.1 we argued that the unifying factor for all forms of deceptive speech is the presence of an ulterior motive, a goal that supersedes standing norms such as be truthful. Paradigmatic cases of morally impermissible ulterior motives are those involving pernicious goals, such as the concealment of a crime or the implementation of a confidence scheme. In contrast, section 11.2 surveyed examples where deceptive speech serves pro-social functions and the ulterior motives are pro-social goals, such as boosting a colleague's self-esteem or establishing trust. There is a discriminating factor between right and wrong in these cases; however, it depends not on the deceptiveness of the speech per se but on the goals that motivate that speech.

If this analysis is correct, it implies that an ethical, deception-capable robot will need the ability to represent and evaluate ranked sets of goals. To draw an example from literature, consider Isaac Asimov's Three Laws of Robotics, an early proposal for a system of robot ethics. Asimov's "laws" are really inviolable, prioritized, high-level goals or, in our terminology, a ranked set of standing norms combined with the stipulation that no ulterior motives may supersede them. Notably, accepting this system means accepting deception by robots, because Asimov's stipulation ensures that his laws will be ranked above the norm be truthful. For instance, the First Law is that "a robot may not injure a human being or, through inaction, allow a human being to come to harm," and the Second Law is that "a robot must obey the orders given it by human beings except where such orders would conflict with the First Law." Suppose a murderer comes to the door of a home and orders a robot butler to tell him where his intended victim is hiding. Here, the Second Law mandates that the robot answer, but to satisfy the First Law it must lie to protect the victim's life.
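On our reading, the Three Laws amount to a ranked set of inviolable standing norms with be truthful sitting below them. The sketch below is only an illustration of that ranking (the candidate responses, feature names, and crude lexicographic rule are ours, not Asimov's or the chapter's); it shows how the murderer-at-the-door case resolves to a lie without any ulterior motive entering the picture.

```python
# Candidate responses the robot butler could give to the murderer at the door.
candidates = [
    {"utterance": "She is hiding in the attic.",
     "truthful": True, "obeys_order": True, "endangers_human": True},
    {"utterance": "She left an hour ago.",      # the lie
     "truthful": False, "obeys_order": True, "endangers_human": False},
    {"utterance": "(silence)",                  # refuse to answer
     "truthful": True, "obeys_order": False, "endangers_human": False},
]

def score(option: dict) -> tuple:
    """Rank options lexicographically by the norms' priority order:
    First Law (no harm) > Second Law (obey) > be truthful."""
    return (
        not option["endangers_human"],   # First Law dominates everything
        option["obeys_order"],           # Second Law: answer the question
        option["truthful"],              # the conversational norm comes last
    )

best = max(candidates, key=score)
print(best["utterance"])   # the lie wins: it satisfies both higher-ranked laws
```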
Asimov’s system is based on a set of duties or obligations, which makes it a
deontological ethical theory (Abney ). Traditionally, this approach has
produced the most severe criticisms of the permissibility of lying, so perhaps it
is comforting to see that deceptive robots may conform to a deontological eth-
ics. Immanuel Kant, the most famous deontologist, repeatedly and passionately
defended the extreme position that deceptive speech is never permissible under
any circumstance. He even addressed the foregoing scenario, where a murderer
asks someone where to nd a would- be victim. Kant () concluded that even
to save a life, lying is impermissible. Most ethicists have found this conclusion
distasteful, and nuanced discussions of the permissibility of lying oen proceed
by rst categorizing lies in terms of the respective ulterior motives that produce
them, then assessing whether these motives should indeed be allowed to supplant
the mandate for truthfulness (e.g., Bok).
e perspective outlined here is also compatible with consequentialism, the
main alternative to deontological theories. Consequentialists assess the morality
of an action on the basis of its consequences— good acts are those that produce
more of some intrinsically good property in the world, such as well- being or hap-
piness, while bad acts are those that produce less of it. How does this view evalu-
ate deceptive speech? If the speech has overall positive consequences, increasing
well- being or happiness, then whether or not it is deceptive, it is permissible, per-
haps even mandatory. e inuential consequentialist John Stuart Mill addressed
this topic:while placing a high value on trustworthiness, he nevertheless asserted
the permissibility to deceive “when the withholding of some fact would save
an individual from great and unmerited evil, and when the withholding can
only be eected by denial” (, ch.).
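On this view the veridicality of a candidate utterance never enters the moral calculus; only its expected effects do. The toy comparison below makes that explicit; the numeric well-being values are arbitrary placeholders of our own, used only to show the shape of the calculation.

```python
# A consequentialist chooses the utterance with the best expected outcome,
# whether or not it is deceptive.
options = {
    # utterance: (is_truthful, expected change in overall well-being)
    "reveal the victim's hiding place": (True, -100.0),   # great and unmerited evil
    "deny knowing where the victim is": (False, +50.0),   # withholding by denial
}

best = max(options, key=lambda u: options[u][1])
truthful, utility = options[best]
print(f"permissible (indeed mandated) act: '{best}' "
      f"(truthful={truthful}, expected well-being change={utility:+.0f})")
```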
By shifting the locus of moral assessment to the speaker's ulterior motives, we have not made the problems of robot ethics any more complex; however, we have also not made the problems any simpler. A deontological specification of duties or a consequentialist calculation of overall well-being remains equally challenging regardless of whether the robot may deceive. The computational deontologist remains burdened with questions about the relative priorities of norms or goals, along with other general challenges related to formalizing ethical maxims and using them to make inferences about an action's morality (Powers). Likewise, the computational consequentialist still faces the challenge of determining and comparing the effects of potential actions (whether deceptive or not) in any given situation. On this point, Keith Abney argues that a simple-minded consequentialism "makes moral evaluation impossible, as even the short-term consequences of most actions are impossible to accurately forecast and weigh."

To conclude, our basic proposal is that effective social robots will need the ability to deceive in pro-social ways, an ability that may facilitate the integration
of android assistants into society, preserve robot safety in the face of prejudice, and protect humans from our own misconceptions about the infallibility of our mechanical helpmates. When assessing the ethicality of speech, the proper target for evaluation is the motivating goal, not the truth or falsity of the speech per se. Consequently, permitting robots to lie does not substantively change the technical challenges of ensuring they behave ethically. Nevertheless, there are challenges distinctive to the design of a deception-capable robot, as it requires a theory of mind and, in particular, the capacity to detect and reason about ulterior motives.

Acknowledgments

We thank Paul Bello for discussions that helped inform the text. Will Bridewell was supported by the Office of Naval Research. The views expressed in this chapter are solely the authors' and should not be taken to reflect any official policy or position of the U.S. government or the Department of Defense.
Notes

1. There is a technical literature on how best to define lying, and one of the most debated issues is whether an "intent to deceive" need be present (for a survey, see Mahon). On our view, an "intent to deceive" is just one possible instance of (or consequence of) an ulterior motive; this analysis both avoids common counterexamples to the "intent" condition and unifies the definition of lying with that of other forms of deception.

2. The crude beginnings of this challenge are already with us. For instance, self-driving cars must find a way to blend in on roads dominated by human drivers, and an accident or impasse may result from their blind adherence to the letter of traffic law (veridicality) and inability to interpret or send the subtle road signals required to fit in with the rest of traffic (bullshit). A classic example is the four-way stop sign, where self-driving cars have become paralyzed when none of the other cars come to a complete stop. Effective navigation of such intersections requires coordinating behavior through nuanced signals of movement, sometimes even bluffs, rather than strict deference to the rules of right-of-way.

3. Throughout the Star Trek TV series, engineer Scotty routinely performs repairs faster than his reported initial estimates. This phenomenon, and the Scotty Principle itself, is explicitly acknowledged in the film Star Trek III: The Search for Spock (1984):

KIRK: How much refit time until we can take her out again?
SCOTTY: Eight weeks, sir, but you don't have eight weeks, so I'll do it for you in two.
KIRK: Mr. Scott, have you always multiplied your repair estimates by a factor of four?
SCOTTY: Certainly, sir. How else can I keep my reputation as a miracle worker?
WorksCited
Abney, Keith. . “Robotics, Ethical eory, and Metaethics: A Guide for the
Perplexed.” In Robot Ethics: e Ethical and Social Implications of Robotics,
edited by Patrick Lin, Keith Abney, and George A. Bekey, – . Cambridge,
MA:MITPress.
Adams, Douglas. . e Hitchhiker’s Guide to the Galaxy. NewYork:HarmonyBooks.
Asimov, Isaac. . I, Robot. NewYork:Doubleday.
Augustine. () . “Lying.” In e Fathers of the Church (vol. :Saint Augustine
Treatises on Various Subjects), edited by Roy J. Deferreri, – . Reprint,
Washington, DC: Catholic University of AmericaPress.
Bok, Sissela. . Lying: Moral Choice in Public and Private Life. New York:
PantheonBooks.
Bridewell, Will and Paul F. Bello. . “Reasoning about Belief Revision to Change
Minds: A Challenge for Cognitive Systems.Advances in Cognitive Systems
:– .
Camp, Elisabeth. . “Metaphor and at Certain ‘Je ne Sais Quoi.’” Philosophical
Studies :– .
Frankfurt, Harry. () . On Bullshit. Princeton, NJ:Princeton UniversityPress.
Grice, H. Paul. . “Logic and Conversation.” In Syntax and Sematics 3:Speech Acts,
edited by Peter Cole and Jerry L. Morgan, – . NewYork:AcademicPress.
Hardcastle, Gary L. and George A. Reisch. . Bullshit and Philosophy.
Chicago:OpenCourt.
Isaac, Alistair M.C. and Will Bridewell. . “Mindreading Deception in Dialog.
Cognitive Systems Research :– .
Kant, Immanuel. () . “On a Supposed Right to Lie from Philanthropy.”
In Practical Philosophy, edited by Mary J. Gregor, – . Reprint,
Cambridge:Cambridge UniversityPress.
Mahon, James Edwin. . “e Denition of Lying and Deception.” In e Stanford
Encyclopedia of Philosophy, Spring  ed., edited by Ed N. Zalta. http:// plato.
stanford.edu/ archives/ spr/ entries/ lying- denition/ .
Mill, John Stuart. . Utilitarianism. London:Parker, Son &Bourn.
Nagel, omas. . “Concealment and Exposure.” Philosophy and Public Aairs
:– .
Powers, omas M. . “Prospects for a Kantian Machine.IEEE Intelligent Systems
:– .
Rogers, Todd, Richard J. Zeckhauser, Francesca Gino, Maurice E. Schweitzer, and
Michael I. Norton. . “Artful Paltering: e Risks and Rewards of Using
OUP UNCORRECTED PROOF – FIRSTPROOFS, Fri Mar 31 2017, NEWGEN
oso-9780190652951.indd 171 3/31/2017 8:19:34 AM
172  ..    
CB.A Template Standardized - -  and Last Modied on --
Truthful Statements to Mislead Others.” HKS Working Paper RWP- .
Harvard University, John F.Kennedy School of Government.
Schauer, Frederick and Richard J. Zeckhauser. . “Paltering.” In Deception:From
Ancient Empires to Internet Dating, edited by Brooke Harrington, – . Stanford,
CA:Stanford UniversityPress.
Sullivan, Timothy. . “Pandering.” Journal of ought :– .
Wilson, Deirdre and Dan Sperber. . “On Grice’s eory of Conversation.” In
Conversation and Discourse, edited by Paul Werth, – . London:CroomHelm.
Article
Full-text available
Questions central to the philosophical discussion of lying to others and other-deception (interpersonal deceiving) may be divided into two kinds. Questions of the first kind are definitional (or conceptual). They include the questions of how lying is to be defined, how deceiving is to be defined, and whether lying is always a form of deceiving. Questions of the second kind are normative (more particularly, moral). They include the questions of whether lying and deceiving are (either defeasibly or non-defeasibly) morally wrong, whether lying is morally worse than deceiving, and whether, if lying and deception are defeasibly morally wrong, they are merely morally optional on certain occasions, or are sometimes morally obligatory. In this entry, we only consider questions of the first kind.
Article
Full-text available
This paper considers the problem of detecting deceptive agents in a conversational context. We argue that distinguishing between types of deception is required to generate successful action. This consideration motivates a novel taxonomy of deceptive and ignorant mental states, emphasizing the importance of an ulterior motive when classifying deceptive agents. After illustrating this taxonomy with a sequence of examples, we introduce a Framework for Identifying Deceptive Entities (FIDE) and demonstrate that FIDE has the representational power to distinguish between the members of our taxonomy. We conclude with some conjectures about how FIDE could be used for inference.
Article
Soulevant la question de la dissimulation comme condition de la civilisation a travers l'exemple de l'exposition publique de la vie privee des hommes politiques aux Etats-Unis, l'A. etablit une analogie entre le probleme liberal de la balance des interets individuels et collectifs, d'une part, et le probleme social de la juste mesure entre la reserve de la vie privee et l'interaction publique, d'autre part. Defendant l'idee de restrictions fonctionnelles en faveur de la protection de la vie privee dans une culture paradoxalement tres tolerante envers la sexualite, l'A. montre que le probleme reside dans la frontiere normative entre ce qui doit etre cache et ce qui doit etre montre, entre la reticence et la reconnaissance, la politesse et la deference, l'interiorite et l'exteriorite, au regard de ce qui est acceptable du point de vue de l'individu et de la societe appartenant a une culture pluraliste
Article
A lie involves three elements: deceptive intent, an inaccurate message, and a harmful effect. When only one or two of these elements is present we do not call the activity lying, even when the practice is no less morally questionable or socially detrimental. This essay explores this area of “less-than-lying,” in particular intentionally deceptive practices such as fudging, twisting, shading, bending, stretching, slanting, exaggerating, distorting, whitewashing, and selective reporting. Such deceptive practices are occasionally called “paltering,” which the American Heritage Dictionary defines as acting insincerely or misleadingly. The analysis assesses the motivations for, effective modes of, and possible remedies against paltering. It considers the strategic interaction between those who palter and those who interpret messages, with both sides adjusting their strategies to account for the general frequency of misleading messages. The moral standing of paltering is discussed. So too are reputational mechanisms – such as gossip – that might discourage its use. Paltering frequently produces consequences as harmful to others as lying. But while lying has been studied throughout the ages, with penalties prescribed by authorities ranging from parents to philosophers, paltering – despite being widespread - has received little systematic study, and penalties for it even less. Given the subtleties of paltering, it is often difficult to detect or troubling to punish, implying that it is also hard to deter. This suggests that when harmful paltering is established, the sanctions against it should be at least as stiff as those against lying.