
The future of online trust (and why Deepfake is advancing it)



Trust has become a first-order concept in AI, urging experts to call for measures ensuring AI is ‘trustworthy’. The danger of untrustworthy AI often culminates with Deepfake, perceived as an unprecedented threat to democracies and online trust through its potential to back sophisticated disinformation campaigns. Little work has, however, been dedicated to the examination of the concept of trust itself, which undermines the arguments supporting such initiatives. By investigating the concept of trust and its evolutions, this paper ultimately defends a non-intuitive position: not only is Deepfake incapable of ruining online trust, but it also offers a unique opportunity to transition towards a framework of social trust better suited to the challenges entailed by the digital age. Discussing the dilemmas traditional societies had to overcome to establish social trust, and the evolution of their solution across modernity, I come to reject rational choice theories as models of trust and to distinguish an ‘instrumental rationality’ from a ‘social rationality’. This allows me to refute the argument which holds Deepfake to be a threat to online trust. In contrast, I argue that Deepfake may even support a transition from instrumental to social rationality, better suited for making decisions in the digital age.
AI and Ethics
The future ofonline trust (and why Deepfake isadvancing it)
Received: 21 December 2020 / Accepted: 13 June 2021
© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2021
Keywords Trust· Deepfake· Disinformation· Fake news· AI ethics
1 Introduction
Trust has become a particularly trendy concept in AI. Nowadays, most major technology companies claim their commitment to building a ‘trustworthy AI’, while social media and governments worry about ensuring trust in online information. The European Commission even formed an expert group to write ‘ethics guidelines for trustworthy AI’ [22] to establish an initial framework for regulating the development of AI in the EU. The danger of untrustworthy AI culminates with Deepfake, often presented as the gravedigger of online trust. This technique, which notably makes it possible to create synthetic videos of existing people, is widely perceived as a deadly threat to democracies, given its potential to serve sophisticated disinformation campaigns [10, 41, 43], manipulate elections [36] and annihilate any trust in online information [40, 42], thereby paving the way for a nihilist post-truth world [13]. But what actually is trust? Usually left aside, this question happens to be trickier than it seems, and its complexity is testament to a rich evolution in its theory and manifestation across societies.
In this paper, I mobilise anthropological and philosophical theories of trust to defend an unconventional position: not only is Deepfake not a threat to online trust, but it could even represent the critical ally we need to promote trust in the digital age. The first section lays out the original dilemma of building trust, presents the solution found by traditional societies, and traces how trust evolved across political systems up to modern theories thereof, leading me to formulate three conclusions on trust. The second section criticises the modern rational theories of trust, presenting three main arguments against the suitability of rational choice theory to model trust and prompting me to consider an opposition between two types of rationality. The third section breaks down the argument justifying Deepfake as a unique threat to online trust, and individually refutes its three components. It then provides reasons for switching from instrumental to social rationality when making decisions in the digital age, and explains how Deepfake supports such a transition.
* Hubert Etienne
1 Facebook AI Research, Paris, France
2 Department of Philosophy, Ecole Normale Supérieure, Paris, France
3 Sorbonne Université, LIP6, Paris, France
2 The social dilemma oftrust:
fromtheenabling ofpositive circularity
tothemanagement ofrisks
2.1 Solving thesocial dilemma oftrust toenable
positive circularity
The apparent familiarity we cultivate with the concept of trust explains the awkwardness experienced when it comes to defining it, a question we might be tempted to answer in the Augustinian way: ‘If nobody asks me, I know: but if I were desirous to explain it to one that should ask me, plainly I know not’ ([5], 239). We may then try to approach the question from other angles: Whom do we trust? Someone ‘trustworthy’. When do we trust them? When trust is needed. However, as intuitive and circular as these replies may be, not only are they useless for grasping a better understanding of trust, but they can also be widely refuted by the reality of social interactions. Obviously, I do not mean the same thing when saying ‘I trust people not to murder me when I walk in the street’, ‘I trust the news published by the Guardian’ or ‘I trust my friend to keep my secret’. While I have no idea whether people on the street are actually trustworthy, in many cases I also have no need to confide a secret. The etymology of the French word for trust (confiance, deriving from the Latin confidentia) then makes it possible to rationalise the semantic explosion of expressions around one original word, fidere, from which derive se fier à quelqu’un (trust someone) and confier quelque chose à quelqu’un (entrust someone with something) who is fidèle (faithful) or digne de confiance (trustworthy). More importantly, it reveals a key connection between fidere (which has also provided avoir foi en, have faith in) and credere (from which derive croire, believe; donner crédit à, give credit to; and crédible, credible), as trust somewhat involves the mediation of a transcendent order.
The relationship between inter-human trust and faith in transcendent divinities was identified by anthropologists investigating the first dilemma traditional societies had to overcome in order to exist, trust constituting both an absolute necessity and a practical impossibility. The existence of a social system is conditioned by the development of non-destructive interactions between different communities, including the exchange of goods through the logic of gifts and counter-gifts [32] or of family members through the alliance theory [29]. These interactions do not only result from individual decisions but are mainly enabled and driven by wider circular dynamics at the social system’s scale, which enjoys a certain autonomy over that of its members. These dynamics can be either positive, as illustrated by the gift and counter-gift theory (the ontological debt incurred by someone when accepting a gift triggers a whole dynamic of positive indirect reciprocity), or negative, as exemplified by the auto-generative logic of vengeance (the duty to murder the one who has murdered creates a new murderer, calling for a new vengeance and resulting in an endless negative reciprocity of violence). The development of society is conditioned by its success in finding means to defuse negative reciprocities and enable positive ones [4]. In both cases, this implies a passage to the transcendental level and an authentic belief.
According to René Girard, sacrifices of goods (potlatch) and of people allow the containment of the effects of ‘mimetic desire’ [19] within and between tribes, resulting in the production of divinities with whom the group can establish a relationship through cathartic rituals, thus preventing its own destruction. For Mark Anspach, the power a group acquires over itself to counter the dynamic of revenge is given by the reification of vengeance itself, and the possibility of pre-empting the sacrifice: killing an innocent person instead of ‘the person who had killed’ allows for an exit from the vicious circle of vengeance, as illustrated by the story of Sulkana in Pays Moussey [17]. Both interpretations converge on the subterfuge developed by traditional tribes to keep violence in check: reifying it as a third party and establishing a ritualised relationship with it, based on a genuine belief, in order to live together more harmoniously.
Once negative reciprocity is defeated, the establishment of a positive reciprocity allowing a group to prosper comes through the enabling of a gift economy. Gifts are in many ways like sacrifices, or rather ‘auto-sacrifices’, as ‘we give ourselves in giving’ says Mauss ([30], 125),1 remarking on the ontological dimension implied by the material gift which binds people together at the transcendent level; but they imply a reversed temporality. Whereas vengeance comes in response to an anterior action (a murder) in the name of justice, the gift anticipates reciprocity and triggers it: it calls for a reaction. The latter cannot be a direct counter-gift to the donor, but instead takes the form of a gift to another person, as part of a wider indirect reciprocity scheme at the social group level. ‘We do not give to receive in return: we give so that the other also gives’ ([25], 1415),2 that is to say, to establish a relationship, which would otherwise be closed by the reception of a direct counterparty.
Here comes the dilemma: if, by definition, the gift has to be spontaneous (i.e. purely disinterested), how can the giver know that it will effectively lead to an indirect counter-gift, and then initiate a virtuous circle of positive reciprocity? From the receiver’s view, the spontaneity of the gift seems to convey a double bind: on the one hand an obligation to give something back, and on the other hand the impossibility
1 My translation of ‘on se donne en donnant’.
2 My translation of ‘On ne donne pas pour recevoir: on donne pour
que l’autre donne’.
of accepting this message without denaturing the gift itself. The receiver then faces two contradictory signals: a message saying ‘I present you with a gift’ and a meta-message saying ‘you need to give something back’. This dilemma is overcome by the introduction of a third party, the Hau (the spirit of the gift) for the Maori, which ensures the reciprocity of the gift while maintaining its spontaneity by dissociating the senders of the two messages: the donor sends the message and the third party sends the meta-message [4]. More than just reflecting the social interaction at the meta-level, the third party that emerges from the interaction of the social group transforms it through a mechanism of ‘auto-transcendence’, which enables trust within society as long as its members keep faith in the transcendent entity.
2.2 The modern conception of trust grounded in rational choice theory
According to Girard and Anspach, the forms of exchange and the types of third parties have evolved across the ages, but the structure of social trust remains unchanged. Originally established on the belief in spirits, it became faith in a unique God and, by delegation, in his terrestrial lieutenant, the monarch of divine right. With the end of political theology, the advent of Modernity led to a major shift in the perception of social order and the approach to the future. From natural and divine, the social order came to be perceived as a human institution resulting from the interactions of free agents with unpredictable behaviours, thus calling for a need to reposition trust on a new basis [12]. The future becomes all the more synonymous with uncertainty as it is no longer ruled by tradition, and as the mode of social interaction progressively switches from ‘familiarity’ to ‘anonymity’ [29]. This is when the contract theorists acknowledge the role of the State as a third party setting up the conditions for trust, making social life possible in a territory ruled by law. This requirement first relates to the confidence that any attempt against one’s life [23] or predation against one’s goods [28, 34] would be severely punished, then more generally to the sacralisation of all contracts made legally between individuals in a society.
It is worth noting here that despite the great difference between Thomas Hobbes’s and Jean-Jacques Rousseau’s anthropologies, the mechanisms at play in the construction of such trust are fundamentally rational. With the State charged with enforcing the consequences of individuals’ actions under all circumstances in an impersonal way (sanctioning an assassin is not a personal act of vengeance, but a collective reply to a crime against all), the goal is that the certainty and the severity of the sanctions operate as an ex ante regulatory mechanism to discourage attempts to break the law. Rational expectations are at the core of Hobbes’ conception of the State, promising negative incentives to extinguish opportunities to free ride. By doing so, he tends to substitute a systematic general distrust (the war of all against all) with a systematic common trust (the impossibility of a war of all against all) through a promise of mutually assured destruction, which remains relevant today in the nuclear deterrence doctrine. Rousseau goes even further towards modern economic rational thinking by explicitly presenting his pactus associatis under a costs-and-benefits scheme: ‘What man loses with the social contract is natural freedom and […] what he earns is civil liberty and the property of everything he owns’ ([34], 38).3
Nevertheless, Hobbes and Rousseau’s conception of the State as a third party does not relate to trust itself (Hobbes remains suspicious of the State and devises an exit clause in case it should turn against him), but to the conditions for the development of trust between individuals. It also rests on the assumptions that the State has both the right intentions (is not corrupted) and the effective capacity (power) to find contract breakers and to sanction them accordingly. Not only do people nowadays lack such trust in their political systems, but not all betrayals are illegal, i.e. fall under the State’s jurisdiction. The need to refine the theory of trust, to decentralise it from the orbit of the State, and to extend the scope of social interactions it can account for, found in rational choice theory a promising pathway.
Anthony Giddens notes that ‘the first situation calling for a need to trust is not the absence of power, but the insufficiency of information’ ([18], 40),4 or rather a situation of imperfect information between full omniscience and perfect ignorance, as ‘who knows everything does not need to trust, who does not know anything cannot reasonably trust’, adds Georg Simmel ([37], 356).5 In addition to being a poor substitute for good information, trust would be entirely finalised, characterising relationships between rational agents who only trust each other when they have an interest in doing so, expect some benefit for themselves [16], and anticipate a rational interest from others to be trustworthy in the right way, at the right moment [21]. The influence of rational choice theory has been so important that trustworthiness nowadays seems to be associated with a simple absence of rational antagonist interests, as when situations call for the arbitration of a third party, supposed ‘trustworthy’ on the sole basis that it has no a priori direct interest at stake.
3 My translation of ‘Ce que l’homme perd par le contrat social, c’est
sa liberté naturelle & […] ce qu’il gagne, c’est la liberté civile & la
propriété de tout ce qu’il possède’.
4 My translation of ‘la première situation exigeant un besoin
de confiance n’est pas l’absence de pouvoir, mais l’insuffisance
5 My translation of ‘celui qui sait tout n’a pas besoin de faire confi-
ance, celui qui ne sait rien ne peut raisonnablement même pas faire
From a precious social good, trust would thus have become a blemish: a situation of involuntary vulnerability in a context of poor information, to be avoided at all costs. As a mechanism to control risks and uncertainty, we would then mainly resort to trust in cases of strict necessity, in situations where rational interests align (e.g. when walking safely on the street without expecting anyone to assault me, or when a creditor lends money to a debtor), or by convenience, as a mixture of these two: when I trust a doctor to perform surgery, assuming that filling the competency gap to do it myself would be much too costly. This is also the case when I trust a newspaper to convey news that is properly verified by its columnists, presupposing the company’s business interest in sharing only good-quality information, and assuming that fact-checking everything myself would cost more than the value of the information itself.
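The convenience cases just described reduce, under rational choice theory, to a simple cost-benefit comparison. The sketch below is purely illustrative (the function name and the numbers are my own invention, not drawn from the literature cited here):

```python
# Illustrative only: under rational choice theory, 'trusting' a newspaper
# is a rational expectation resting on a cost-benefit comparison.

def rely_on_source(value_of_information: float,
                   cost_of_verifying_myself: float,
                   perceived_error_rate: float) -> bool:
    """Rely on the source when the expected loss from occasional errors
    is smaller than the cost of fact-checking everything myself."""
    expected_loss = perceived_error_rate * value_of_information
    return expected_loss < cost_of_verifying_myself

# An article worth 10 units, a verification cost of 50 units and a
# perceived error rate of 5% make reliance the rational choice:
print(rely_on_source(10.0, 50.0, 0.05))  # True
```

On this model, the moment verification becomes cheaper than the expected loss, ‘trust’ evaporates, which is exactly the finalised conception of trust this paper goes on to reject.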
3 The instrumental rationality of reliance and the social rationality of trust
All these approaches grounded in rational choice theory, however, should be rejected for three reasons. First, they are based on a confusion between trust and expectation. Second, they are invalidated by the reality of social interactions. Third, they fail to recognise trust as an objective in itself.
3.1 The independence oftrust withthedegree
There is no harm in recalling that rational agents are only a radical simplification of human decision-making. The theory is based on a tripod comprising a recursive metacognitive knowledge (the agent makes decisions based upon certain principles and is aware of this cognitive process), a projective metacognitive knowledge (all other agents are also supposedly rational, thus making decisions based on the same principles) and information about the evolution of the system (deriving from the observation of the environment and other agents’ behaviours). The financial markets, in theory, relatively well suit this description, and this is why rational choice theory can be helpful here to model economic behaviour. Such an environment is said to be relatively efficient because all agents are supposed to access the exact same information, process it in a similar way and aim for the same unique objective: profitability. However, there is no trust at play in the market, only rational decisions made on the basis of available information, proceeding from more or less long-term strategies and more or less risk aversion. Irrational perturbations are said to come from non-professional investors (the famous fear of the trading housewife), human mistakes or psychology (fat-finger errors, loss-aversion biases) and market abuses (rumours, insider trading), that is to say from human factors, justifying their replacement by algorithmic trading. In real life, people are only partially rational, as illustrated by the extensive literature on cognitive biases (e.g. [24]); the extent of their desires largely exceeds that of their economic interests, and the dynamics at stake in social interactions are much more complex than the macroeconomic laws of the market. This is what prompts Jean-Pierre Dupuy to conclude that “the concept of ‘equilibrium’ imported from rational mechanics by the market theory is not suitable to characterise the ‘attractors’ of mimetic dynamics” ([15], 71), playing at the heart of social systems.
A rational agent is, by definition, purely rational. It makes decisions based on available information, which is processed through calculation rules, aiming for expected consequences that maximise its objectives. Its cognitive process does not vary with the degree of information available in such a way that a lack of information would automatically make it switch to another decision mode, that of trust. Were we to integrate trust relationships between agents in a simulation, trust would be represented by a variable attributing different weights to each agent, modifying the probability distribution for each one to become adversarial under specific circumstances, or the credibility of their announcements. Such variables would, however, be doomed to remain an externality which the agent can neither access by itself nor modify, but only receive and integrate into its calculations. In other words, they would modify the agent’s rational expectations, not replace them; and if Ludovic does not trust Laurence in general circumstances, he will not start trusting him in a critical situation where information is lacking. This is why we cannot talk about trust in a situation of strict necessity (when an agent’s fate is completely dependent on another’s will), because there is no choice. We can call this uncertainty, and Ludovic may hope that Laurence takes the decision that would be favourable to him, but there is no trust at play here.
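The simulation argument above can be made concrete with a toy model (entirely my own construction; the class, parameters and numbers are hypothetical). Note how the ‘trust’ weight is an externally supplied parameter that merely rescales the probability of betrayal: the agent’s decision rule remains expected-utility maximisation, which is why such a variable modifies rational expectations rather than instantiating trust.

```python
from dataclasses import dataclass, field

@dataclass
class RationalAgent:
    name: str
    # Externally supplied weights in [0, 1]; the agent can neither
    # produce nor revise them itself -- they remain an externality.
    trust_weights: dict = field(default_factory=dict)

    def p_adversarial(self, other: str, base_rate: float) -> float:
        """The trust weight merely rescales the expected probability
        that `other` turns adversarial; it never replaces the calculation."""
        w = self.trust_weights.get(other, 0.0)
        return base_rate * (1.0 - w)

    def expected_value(self, other: str, payoff_coop: float,
                       payoff_betrayed: float, base_rate: float) -> float:
        """With or without the weight, the agent still maximises
        expected utility -- the decision mode itself is unchanged."""
        p = self.p_adversarial(other, base_rate)
        return (1 - p) * payoff_coop + p * payoff_betrayed

ludovic = RationalAgent("Ludovic", trust_weights={"Laurence": 0.5})
ev = ludovic.expected_value("Laurence", payoff_coop=10.0,
                            payoff_betrayed=-10.0, base_rate=0.4)
print(ev)  # 0.8 * 10 + 0.2 * (-10) = 6.0
```

Lowering the weight to zero only shifts the expected value; at no point does the agent switch to a different mode of reasoning, which is the paper’s point.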
Likewise, it would be erroneous to invoke trust in situations where agents perceive they have aligned interests. Here again, what is at stake is nothing else than rational expectations, because of the metacognitive assumption of rational choice theory: Ludovic predicts Laurence’s behaviour because he assumes Laurence is rational, has access to the same information, and also assumes that Ludovic is rational himself. Only the metacognitive assumption enables both agents to realise they have an interest in collaborating to maximise their chances of reaching their objective. Hardin and Gambetta’s trust is then no more than rational expectations leading to a behavioural ‘synchronisation’, rather than a trust relationship. One may argue here that a true alliance can exist between agents as ‘objective allies’, when objectives are sufficiently distant, so that were Ludovic to be temporarily vulnerable, Laurence would refrain from taking advantage of such a situation though he could. However, here again, Laurence would not refrain from benefiting from Ludovic’s situation for the sake of loyalty, because he is trustworthy, but solely because the optimum scenario satisfying his objectives requires him not to take advantage of it. We cannot even talk about an alliance here (for which Laurence would sacrifice his short-term interests to keep Ludovic as an ally in the long run, increasing his chances of reaching a higher gain), because there is no such thing as an alliance or retaliation for purely rational agents, but only synchronisation. In fact, Laurence could be entirely opportunistic, taking advantage of Ludovic’s weakness if he had an interest in doing so, without this changing the so-called cooperating strategy in the future. Such reasoning is that of the efficient breach of contract theory defended by Judge Richard Posner [33] as part of the Law and Economics doctrine, which is entirely grounded in rational choice theory. Situations change, interests aligned yesterday are not necessarily still aligned today, and there should be no hard feelings in breaking former engagements, at the cost of potential penalties, should this allow the agent to reach a higher level of utility.
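Posner’s efficient-breach reasoning, as summarised here, reduces to a single comparison. The sketch below is a toy rendering with invented numbers, not Posner’s own formulation:

```python
def should_breach(utility_of_performing: float,
                  utility_of_alternative: float,
                  expected_penalty: float) -> bool:
    """A purely rational agent breaks a contract whenever the alternative,
    net of penalties, yields more utility than performing -- no loyalty,
    no hard feelings, only the comparison itself."""
    return utility_of_alternative - expected_penalty > utility_of_performing

# Yesterday's aligned interests: performing is worth more, so no breach.
print(should_breach(10.0, 12.0, 5.0))  # False: 12 - 5 = 7 < 10
# Situations change: a better opportunity now outweighs the penalty.
print(should_breach(10.0, 20.0, 5.0))  # True: 20 - 5 = 15 > 10
```

Nothing in this comparison distinguishes a long-standing ally from a stranger, which is precisely why such an agent can only synchronise, never ally.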
Finally, the observation of social interactions reveals a greater complexity in trust relationships than these theories can describe, also suggesting we have limited power over our relationship to trust. Some people have a capacity to trust easily while others are more mistrustful; some naturally inspire more trust than others, and these distinctions cannot be attributed to a variation in rationality. We also tend to offer some people our trust on the basis of very little knowledge, for reasons which do not even seem rationally grounded, often in an involuntary and even unconscious way [6]. There are many examples of situations where we give our trust although it is not in our interest to do so, e.g. when telling a friend a terrible secret they could use against us, with no apparent benefit in telling them.
3.2 The fundamental distinction between trust and reliance
Against Simmel’s argument that we cannot reasonably trust when we do not know anything, some have argued that we nevertheless tend to trust a doctor we have just met for non-trivial decisions. Relying on a doctor’s prescription does not, however, mean that we completely abandon ourselves to their goodwill [31], and this is why the term ‘reliance’ is often preferred, considered as a weak degree of trust [8]. I reject the idea that such reliance would be of the same order as trust, differing only in degree. Reliance is here based on perfectly rational information (the white coat, people in the waiting room, the doctor listed on the official register, etc.), so here again we are facing rational expectations made on limited information. Just as I do not trust the barriers to cross the railway safely when they are open, but only process them as a signal leading me to expect that no train should arrive immediately, I do not trust, but only expect, someone who looks like a doctor in a place which looks like a medical office to be one. In such situations of convenience, we do comply with the paradigm of rational choice theory, and our decision to abide by the doctor’s advice does not proceed from trust, but from rational expectations. In doing so, we do not so much rely on the doctor as on our conception of the world, just as I expect it to be more likely that a gang member assaults me, and that a preppy-looking young person brings me back my lost wallet, than the contrary. Were the opposite to happen, I would certainly be surprised, because my assumptions would have been proven wrong, but I would not feel betrayed, as no trust was involved.
Ultimately, Simmel is, however, right in saying that we cannot ‘reasonably trust’, because trust is beyond reason, or more precisely beyond this rationality. This is particularly clear when considering his second argument, according to which someone who knows everything does not need to trust. In fact, not only is trust disconnected from the level of information, but it often competes with it. Only someone facing all the evidence of a crime against them would tell their friends ‘I am innocent, you have to trust me’. People choosing to trust their partner again, although the latter has proven untrustworthy by cheating on them several times, clearly do not exhibit rational behaviour, but a wish to repair a relationship. This is because trust is not a matter of reason; it is even most spectacularly exhibited when one puts someone else’s words above all other contradictory information one may have, to make a decision against all rational expectations.
Trust reflects an alternative mode of decision-making, resulting from both a choice to put oneself in a situation of voluntary vulnerability (as trust always comes at the cost of the possibility of being betrayed) by putting someone’s words above any other information, and a desire to build a relationship with this person. It is precisely because humans are only partially rational agents that they are capable of trust, which permits them to transcend rationality to make decisions towards a greater goal, which is ultimately social, not purely individual. Trust abides by a mode of decision-making which may seem irrational when considering particular decisions such as short-term transactional relationships, where the incentive to betray can be high and the cost small. It becomes perfectly rational, however, as soon as trust is no longer considered only as a means to an end, but also as an end in itself, recognising the building of relationships as an objective. Let us refer to these two rationalities as instrumental rationality and social rationality. The former refers to the mode of reasoning of the rational agent as previously defined, whose cognitive process is entirely directed towards making decisions for the purpose of maximising private interests. This is the mode of reliance and rational expectations. The latter is a mode of reasoning closer to a vertigo of reason. It plays a role in decision-making, principally in competition with the outputs of instrumental rationality, but its end is not to be exhausted in an action theory or a theory of knowledge. While instrumental rationality gathers data to produce knowledge (which can be an end in itself for the individual) or to make better decisions against others, social rationality is satisfied by the sole existence of trust relationships, id est by the simple fact of relating to others in a certain way, conceiving social integration as an end in itself. Whereas it is certainly true that the decline of familiarity coming with modernity led to the need to rethink our relationship with trust, it did not, however, change its principle. From this perspective, modernity may rather have brought a need for new ways to develop meaningful relationships in an environment of unfamiliarity, rather than a need to preserve oneself against risks. Instrumental rationality is what allows humans to survive in the Hobbesian state of nature. Social rationality is what enables them to flourish in society.
Three conclusions then arise about trust. First, it is a choice which necessarily implies putting oneself in a situation of voluntary vulnerability for the purpose of social integration. While one can prove oneself trustworthy through one’s past choices, this is only possible if one has also been given the possibility to deceive. Trust can thus only be won after it has been given, and accepting the possibility of being betrayed is necessary to enable the possibility of developing trust relationships. Second, trust cannot be captured by rational decision theory, and it is most strongly experienced precisely when it dictates a behaviour opposed to the recommendation of the decision-making process based on rational expectations. It does not follow that trust is irrational, but rather that it abides by another type of rationality, as an alternative mode of reasoning dedicated to the building of strong social relationships, even at the cost of truth, efficiency or one’s own life. Third, given that trust derives from social rationality and is necessarily associated with the possibility of betrayal, implying intentionality, trust can only characterise relationships between agents endowed with free will. This is why we cannot be betrayed by false news or a broken chair, but only by their personified source or manufacturer.
4 Deepfake promotes online trust instead of ruining it
4.1 Deepfake is not a threat for democracies
Deepfake is a computer vision technique, using deep learn-
ing methods to generate synthetic images for the purpose
of reproducing the features of a source image in a target
image. It was principally mediatised with the Face2Face
project [39] and the Synthesizing Obama project [38]. It
has since found applications in a wide range of domains
from internet memes to art and the cinema industry. How-
ever, the applications which have caught the most attention
were those related to political content, such as the fake
video speeches of Boris Johnson and Jeremy Corbyn released
by the think tank Future Advocacy in the context of the
UK’s 2019 general elections.6 In 2020, the activist group
Extinction Rebellion released a fake video of the Belgian
Prime Minister, Sophie Wilmès, suggesting a possible link
between deforestation and Covid-19.7 These highly media-
tised examples, together with the rapid improvement of
Deepfakes’ performance, fed a great concern within the AI
ethics community: we may soon be incapable of distinguish-
ing machine-generated content from real content, leaving us
vulnerable to sophisticated disinformation campaigns for the
purpose of election manipulation. By preventing us from
trusting anything online, Deepfake would thus bring disin-
formation techniques to their paroxysm and even pave the
way to a post-truth world, characterised by an unprecedented
relativism and a systematic distrust. Although this concern
seems legitimate a priori, I will now show that it has no
solid ground. The argument can be broken down as follows:
(1) Deepfake’s performance represents an unprecedented
potential for information manipulation
(2) The major issue deriving from Deepfake relates to dis-
information and election manipulation
(3) Used as such, Deepfake could then definitively ruin
online trust
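Before examining each premise, it may help to picture the mechanism behind face-swap deepfakes. The sketch below is a deliberately toy illustration of the shared-encoder/dual-decoder autoencoder design popularised by early face-swap tools; it is not the method of any system cited here, and the linear "networks" are hypothetical stand-ins for trained deep models. The key idea survives the simplification: one encoder captures pose and expression in a latent code, each identity has its own decoder, and swapping decoders at inference renders identity A's expression with identity B's appearance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "autoencoder" weights standing in for trained deep networks.
# In the classic face-swap design, one encoder is shared across two
# identities, and each identity gets its own decoder.
DIM, LATENT = 64, 8
encoder = rng.normal(size=(LATENT, DIM)) / np.sqrt(DIM)
decoder_a = rng.normal(size=(DIM, LATENT))  # would reconstruct identity A
decoder_b = rng.normal(size=(DIM, LATENT))  # would reconstruct identity B

def encode(face):
    """Map a (flattened) face image to a latent pose/expression code."""
    return encoder @ face

def face_swap(face_of_a):
    """Encode a frame of identity A, then decode it with B's decoder:
    the output keeps A's pose/expression (carried by the latent code)
    but is rendered with B's appearance (carried by the decoder)."""
    return decoder_b @ encode(face_of_a)

frame = rng.normal(size=DIM)  # stand-in for a flattened face image
swapped = face_swap(frame)
print(swapped.shape)          # (64,)
```

In real systems the encoder and decoders are deep convolutional networks trained on many frames of each identity, which is why output quality scales with available footage of the target.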
With regard to the first part of the argument, we shall
indeed concede that Deepfake techniques are improving rap-
idly and that it will certainly soon be impossible for a human
being to discriminate between synthetic and non-synthetic
content without computer support. However, Deepfake is
neither the first, nor the most effective technique of infor-
mation manipulation. Ancient Greece’s rhetoricians were
already using a vast range of sophistic techniques to convince
or persuade an audience, and selling their art to
the wealthy Athenian youth, preparing them for the practice
of power in democracy. Since Plato, we have tended to dissociate
truth from eloquence in political discourse; yet rhetoric
is still taught in schools of politics and shapes every
public address. Dupuy [15], for instance, explains how the
argument of reversing the burden of proof is used to
reject the application of the precautionary principle
in the innovation domain, even though that argument itself rests on a petitio
principii. Considered one of the founders of public relations,
Edward Bernays [7] even regarded propaganda as a necessity
for political systems following universal suffrage,
'to create order from chaos' (1928, 141),8 and the
manipulative strategies he developed notably allowed him
to persuade American women to smoke, for the benefit of
the American Tobacco Company.
6 https://futureadvocacy.com/deepfakes/.
7 https://www.extinctionrebellion.be/en/tell-the-truth.
Claims may be subjective, discourses misleading and
communication campaigns deceiving, but facts are facts,
one may say. However, facts are always captured within a
certain context and conveyed in a certain way, which sup-
ports a particular vision of the world. Besides outright lies,
common misleading techniques used on social media for
disinformation purposes include real photos, videos and quotes
either truncated, or shared outside their original context,
suggesting that the battle for accurate information rages in
the field of misleading suggestions, rather than that of fac-
tual accuracy. In another context, finance workers excel in
the art of presenting univocal data in different ways, highlighting
some aggregates instead of others or changing the
scale of the graph to modify the shape of the curve, to sup-
port the story they aim to tell. Public administrations also
demonstrate a great ingenuity in this domain when commu-
nicating on the performance of their actions to reduce the
unemployment rate [35] or when soliciting polling institutes
to build a public opinion suitable for the political measure
they aim to enforce [11]. Finally, even a purely factual message
will certainly not produce the same impact, whether I
say 'George died yesterday at 3pm' or 'Yesterday, a Black
American citizen was murdered by a White police officer',
an observation that echoes Friedrich Nietzsche's claim that there are
no facts, only interpretations. It follows from the above
that Deepfake should be considered only one trick among
others within the large spectrum of manipulation techniques.
Some have come to consider it as an evolution rather than
a revolution in the history of manipulation techniques [43]
and I would add that deepfakes may be even easier to counter,
as they relate to a question of factual accuracy (did X
really deliver this speech or not?) rather than to vicious
misleading suggestions.
The second part of the argument states that Deepfake’s
greatest issue relates to disinformation and election manipulation.
This explains why the main efforts to address its potential
negative impacts have so far been focused not on the regulation
of its uses, but on the detection of synthetic content.
This is illustrated by Facebook’s Deepfake Detection Chal-
lenge,9 Google’s open-sourced dataset of Deepfake videos
for training purposes10 or Microsoft’s Video Authenticator,11
all initiated for the explicit purpose of supporting the fight
against misinformation. Deeptrace's 2019 report states that,
of the 15,000 deepfakes found online and analysed by the
researchers, 96% were pornographic, principally
fake videos of celebrities made without their
consent [2]. The application DeepNude is already leveraging
Deepfake techniques to monetise the undressing of
women, offering on-demand services to reconstruct a naked
body from a given picture. In addition, a second report from
Deeptrace (now Sensity) revealed in October 2020 the existence
of a deepfake bot operating on Telegram, which had
been used to 'strip' around 700,000 women, with over one hundred
thousand of the resulting images publicly shared on the platform;
the report warns against the dangers of such bots being 'weaponized
for the purpose of public shaming or extortion-based
attacks' ([1], 8). These two applications obviously raise
major concerns for privacy, personal image property and
reputation. In contrast with the fear of Deepfake's potential
use for disinformation, they epitomise the reality of Deepfake's
uses today, setting confirmed dangers and actual
victims against hypothetical risks, and truly constitute a novel and
unique threat to people's privacy.
The third component of the argument finally states
that, were Deepfake’s performance to become sufficiently
advanced to make the detection of synthetic content
impossible, and were it used for the purpose of disseminating
false news, it could then deal a fatal blow to online trust.
As previously said, we never trust a piece of information,
but always the moral person responsible for it (I would say
here either an individual or a community of people), as the
content itself cannot be granted a proper intentionality. The
whole question then is that of the definition of online trust.
If what is meant by this is that, by believing the false news
spread against their political representatives, people would
end up losing faith in them and come to distrust political
institutions as a whole, then I would reply that such a state of
general distrust already exists and is not to be attributed to
Deepfake. It is not because of Deepfake that a great num-
ber of U.S. citizens have little trust in their political rep-
resentatives, with 81% of them believing that members of
Congress behave unethically some or most of the time [32],
nor that three quarters of the French population consider
their political representatives corrupt [26].
Arthur Goldhammer and Pierre Rosanvallon [20] wrote a
whole book titled Counter-Democracy: Politics in an Age
of Distrust in 2008 to investigate the reasons for the general
8 My translation of “pour créer de l’ordre à partir du chaos”.
9 https://ai.facebook.com/blog/deepfake-detection-challenge-results-an-open-initiative-to-advance-ai/.
10 https://ai.googleblog.com/2019/09/contributing-data-to-deepfake-detection.html.
11 https://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator/.
level of distrust ten years before the first deepfakes. Disinformational
deepfakes could then indeed take advantage of this
generalised distrust, which leaves people more vulnerable
to anti-elite hoaxes, and make what used to be
perceived as impossible now seem merely improbable. Yet, technology
should not serve as a scapegoat, taking the blame for a politi-
cal issue, which first and foremost calls for political change.
The confirmation bias regularly cited as a key vector of the
spread of false news would not play such a significant role
if people considered their representatives trustworthy.
4.2 The role of Deepfake in promoting trust for the digital age
On the other hand, if what is to be understood by ‘online
trust’ is rather a sort of general ‘reliance’ in cyberspace, as
an environment to collect accurate information and develop
authentic interactions, then Deepfake will certainly consti-
tute a challenge. Just like our biological senses, which allow
us to inhabit a world by collecting information about our
environment to make decisions that will define our interac-
tions with its different entities, we are using digital technolo-
gies more and more as a digital sense to inhabit cyberspace.
However, our biological senses can deceive us, as famously
argued by René Descartes (1641) [14], and we should be
aware of their fallibility as sources of knowledge
when facing an optical illusion or a mirage in the desert.
Likewise, not only can our digital sense deceive us, but there
are also 'evil demons' actively seeking to fool us in cyberspace.
It is thus of paramount importance for us not only to
be aware of this, but also to internalise it and make an informed
use of this sense, just as someone can learn to live with
the contradictory messages of a phantom limb. To this end,
I believe Deepfake can be of great help, by training us
not to believe passively the signals received through our sensory
inputs, but rather to sharpen our critical mind and actively
search for a trustworthy source of information.
A century ago, a photograph would have been considered
irrefutable proof, whereas it proves nothing today,
since photo-editing software is available to anyone. From
dynamic pricing to persuasive design and nudging strategies,
manipulation techniques are already highly sophisticated,
raising most digital technologies to the status of captological
interfaces. GPT-3 [9] can also produce convincing
human-style articles, which could be used for the purposes
of deception. Tomorrow, the internet of things will multiply
the number of connection points with cyberspace present
in individuals' ecosystems, and the possibilities of virtual
reality and invasive technologies such as brain–computer
interfaces will doubtless make Deepfake seem like a prehistoric
technique. For this reason, it is vital to train our brain
to overcome the passive credulity we grant to videos, just
as we once did with photos, and to treat
entry points to cyberspace as bargaining spaces, considering
that even access to information has become an adversarial
game. Only in such a way will it be possible for us to free
ourselves from the drifts of Bernays' 'invisible government'
([7], 31), and establish a true bilateral dialogue between the
people and their decision-makers: ‘public opinion becomes
aware of the methods used to model its opinions and behav-
iours. Better informed about its own functioning, it will
exhibit all the more receptivity to reasonable announcements
aligned with its own interests […] If it formulates its com-
mercial demands more intelligibly, companies will satisfy
its new exigencies’ ([7], 141).12
Instead of harming trust, Deepfake could on the contrary
promote it and help us prepare ourselves for the challenges
of the digital age. This calls for a communication effort to
inform the public opinion about the performance of such
deceiving techniques. It also requires acknowledging that
Deepfake can not only be used to put someone's words in
someone else's mouth, but also offers an alibi for people to
deny the veracity of embarrassing words from an accurate
recording, or to fake someone’s identity in a meeting. Still,
ceasing to believe in everything does not result in distrust-
ing everyone, and this is why social relativism on truth does
not necessarily lead to nihilism on trust. Reducing our passive,
systematic credulity towards all information coming
from cyberspace should also lead us to search more actively
for trustworthy sources and redesign the map of our trust
relationships around a network of key people. Provided that
authentic identification and information traceability can be
secured—for instance through blockchain
solutions to rapidly identify the original source of a piece
of information—we should observe the emergence of a new
kind of authority, personified by actors sharing well-verified
information on a regular basis. At once journalists and influencers,
these new actors will not only be considered reliable
on the basis of the history of accurate information they have
shared in the past—while always at risk of losing this reliance
capital by sharing a single piece of false news
[3]—but also be trusted because of the personal engagement
underlying their articles, which exposes their individual reputation
to public shaming in case of a failure perceived as betrayal.
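The traceability condition mentioned in this paragraph can be made concrete with a minimal sketch. The example below is an illustration only, not the paper's proposal: author labels and field names are hypothetical, and a production system would use digital signatures and a distributed ledger rather than bare hashes. It chains content hashes so that each re-share commits to the previous record, letting anyone verify that a piece of information traces back to its original, reputationally engaged source.

```python
import hashlib
import json

def record(content: bytes, author: str, prev_hash: str) -> dict:
    """One provenance entry: commit to the content, its author,
    and the hash of the previous entry in the chain."""
    entry = {
        "author": author,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

def verify(chain: list) -> bool:
    """Check that every entry correctly links to its predecessor."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev"] != prev["hash"]:
            return False
    return True

video = b"original video bytes"
genesis = record(video, "newsroom@example", "0" * 64)    # original source
share = record(video, "relay@example", genesis["hash"])  # a re-share

print(verify([genesis, share]))        # True: provenance intact
tampered = dict(share, prev="f" * 64)  # break the link to the source
print(verify([genesis, tampered]))     # False: tampering detected
```

The design point is that reputation attaches to the named author of the genesis record: a reader who trusts that source can accept any re-share whose chain verifies back to it, without trusting the intermediaries.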
The reputation cost was already discussed by Hobbes [23]
(1651, 239), who makes it an argument against the posture
of the ‘fool’, a pure homo oeconomicus with no considera-
tion for past conventions and making decisions based solely
on its immediate interests. This emphasises the distinction
12 My translation of ‘le grand public prend conscience des méthodes
utilisées pour modeler ses opinions et ses comportements. Mieux
informé de son propre fonctionnement, il se montrera d’autant plus
réceptif à des annonces raisonnables allant dans le sens de ses intérêts
[…] S’il formule plus intelligemment ses demandes commerciales,
les entreprises satisferont ses nouvelles exigences’.
between the conditions of social trust which are enabled by
the State as a reliable third party, allowing people to interact
safely, and trust itself, which relates to individuals’ reputa-
tion. Although such a posture is irrational for Hobbes—who
considers the risk too high and synonymous with social sui-
cide, as nobody would be disposed to contract with the fool
anymore—it was, however, still possible for someone with
a bad reputation to flee their city or country and start a new
life with the money made on the betrayal in Hobbes’ times.
The opportunities for such efficient-breach-of-contract-like
postures are less and less possible in the age of the ‘global
village’, when a stranger’s reputation can quickly be verified
on Google from virtually any country in the world.
This is how Deepfake, by challenging the passive recep-
tion of the signals received from our senses and confus-
ing our appreciation of the possible and the probable, may
increase the need for trust and thus prepare us to navigate
in the digital age while avoiding manipulative enterprises.
Once we are left incapable of reasonably relying on received information,
such constructive scepticism may force us to
build a network of trustworthy relationships with personified
nodes, who engage their reputation to ensure the integrity of
the structure. In such a configuration, we would ground
our judgment less and less on the processing of our increasingly
imperfect perception of the world by our equally imperfect
instrumental rationality, and more and more on the social
rationality of trust, enabled by the feedback loop of public
shaming and substituting social faith for increasingly impossible
rational expectations.
5 Conclusion
Questioning the widely accepted assumption that holds
Deepfake to be a threat to democracy, due to its potential
to back sophisticated disinformation campaigns and bury
the conditions of possibility for online trust, I have set out
the reasons which prompt me to reject this assumption for lack of
solid ground, and to consider, on the contrary, that Deepfake
could help promote trust and prepare us for the digital
age. I started by recalling the social justification of trust
in traditional societies, as a necessity enabling the positive
reciprocity of any social life, followed by the introduction
of a sacred third party to solve the social dilemma of the
gift. After the progressive institutionalisation of this third
party leading to the modern conception of the State, I intro-
duced the contemporary theories of trust based on rational
choice theory to make decisions in situations of involuntary
vulnerability associated with a lack of information. I then
rejected these on the ground that they are based on a con-
fusion between expectations and trust, are invalidated by
the reality of social interactions and fail to understand trust
not only as a means to an end, but also as an end in itself.
This led me to formulate a distinction between instrumental
rationality, based on perceived reliable information to for-
mulate rational expectations, and social rationality of trust
which goes beyond an action theory to associate information
processing with a social end of self-realisation. I finally
countered the claim that Deepfake poses a unique threat to
democracies, arguing that it is only a manipulatory instru-
ment among others and likely not even the most efficient
one, that the real issues it raises relate to privacy, not misin-
formation, and that it ultimately challenges not trust but
reliance on digitally perceived information. At a time when
digital perception simultaneously grows in importance and
uncertainty, as we are increasingly experiencing our world
through the mediation of cyberspace—which also is a com-
petition space between powerful actors with sophisticated
manipulative instruments for the shaping of reality—Deep-
fake can help us enhance our collective critical mind to
reduce our gullibility towards false news and promote source
verification. In the long run, it can also help us make the
deliberate choice of shifting from instrumental rationality
towards the social rationality of trust, considering faith in
people a more viable way to ensure one's self-realisation
within a coherent network of trusted individuals than
expectations based on reliable information.
References
1. Ajder, H., Patrini, G., Cavalli, F.: Automating image abuse: deepfake bots on Telegram. Sensity (2020)
2. Ajder, H., Patrini, G., Cavalli, F., Cullen L.: The state of deep-
fakes: landscape, threats, and impact, deeptrace (2019)
3. Altay, S., Hacquin, A.-S., Mercier, H.: Why do so few people share fake news? It hurts their reputation. (2019). https://doi.org/10.1177/1461444820969893
4. Anspach, M.R.: A charge de revanche: figures élémentaires de la
réciprocité. Seuil, Paris (2002)
5. Augustine [ca. 397]: Confessions, Volume II: Books 9–13. Ed. &
trans. by Hammond C.J.-B., Harvard University Press, Cambridge
6. Baier, A.: Trust and anti-trust. Ethics 96(2), 231–260 (1986)
7. Bernays, E.: [1928] Propaganda. Zones, Paris (2007)
8. Blackburn, S.: Trust, cooperation and human psychology. In:
Braithwaite, V., Levi, M. (eds.) Trust and Governance. Russel
Sage, New York (1998)
9. Brown, T.B., Mann, B., Ryder N., Subbiah, M., Kaplan, J., Dhari-
wal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agar-
wal, S., Herbert-Voss, A., Krueger, G, Henighan, T., Child, R.,
Ramesh, A., Ziegler, D.M., Wu, J., Winter, C., Hesse, C., Chen,
M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner,
C., McCandlish, S., Radford, A., Sutskever, I., Amodei, D.: Language models are few-shot learners. Preprint at arXiv:2005.14165 (2020)
10. Boneh, D., Grotto, A.J., McDaniel, P., Papernot, N.: Preparing for
the age of deepfakes and disinformation, Stanford HAI (2020)
11. Bourdieu, P.: L’opinion publique n’existe pas. Les temps modernes
318, 1292–1309 (1973)
12. Le Bouter, F.: Formes et fonctions de la confiance dans la société moderne. Implications philosophiques. http://www.implications-philosophiques.org/actualite/une/formes-et-fonctions-de-la-confiance-dans-la-societe-moderne/ (2014). Accessed 21 Dec 2020
13. Chesney, R., Citron, D.: Deepfakes and the new disinformation
war. The coming age of post-truth geopolitics. Foreign affairs.
https://www.foreignaffairs.com/articles/world/2018-12-11/deepfakes-and-new-disinformation-war (2019). Accessed 21 Dec 2020
14. Descartes, R.: [1641] Méditations métaphysiques. Flammarion,
Paris (2011)
15. Dupuy, J.-P.: Pour un catastrophisme éclairé. Seuil, Paris (2002)
16. Gambetta, D.: Trust: The Making and Breaking of Cooperative Relations. Blackwell, Oxford (1988)
17. de Garine, I.: Foulina: possédés du pays Mousseye, documentary.
CNRS Images, Meudon, cop. 1966 (2005)
18. Giddens, A.: [1990] Les conséquences de la modernité, trans. By
Meyer O. L’Harmattan, Paris (1994)
19. Girard, R.: Mensonge romantique et vérité romanesque. Grasset, Paris (1961)
20. Goldhammer, A., Rosanvallon, P.: Counter-Democracy: Politics in an Age of Distrust. Cambridge University Press, Cambridge (2008)
21. Hardin, R.: Communautés et réseaux de confiance. In: Ogien, A.,
Quéré, L. (eds.) Les Moments de la confiance. Economica, Lon-
don (2006)
22. High-Level Expert Group on AI (HLEGAI): Ethics Guidelines
for Trustworthy Artificial Intelligence. European Commission,
Brussels (2019)
23. Hobbes, T.: [1651], Léviathan, ou la Matière, la Forme et la Puis-
sance d’un Etat ecclésiastique et civil, trans. by Mairet G. Gal-
limard, Paris (2000)
24. Kahneman, D., Slovic, P., Tversky, A.: Judgment Under Uncer-
tainty: Heuristics and Biases. Cambridge University Press, New
York (1982)
25. Lefort, C.: L'échange et la lutte des hommes. Les Temps modernes, 6 (1961)
26. Lévy J.-D., Bartoli, P.-H., Hauser, M.: Les perceptions de la cor-
ruption en France. Harris interactive (2019)
27. Lévi-Strauss, C.: [1949] Les structures élémentaires de la parenté.
De Gruyter Mouton, Paris (2002)
28. Locke, J.: [1690] Traité du gouvernement civil, De sa véritable
origine, de son étendue et de sa fin, trans. by Mazel D. Calixte
Volland, Paris (1802)
29. Luhmann, N.: La confiance : Un mécanisme de réduction de
la complexité sociale, trans. by Bouchard S. Economica, Paris
30. Mauss, M.: Essai sur le don, forme et raison de l’échange dans
les sociétés archaïques. In: L’année sociologique, t. I, Mauss M.
(dir.), Paris, Librairie Félix Alcan (1925)
31. Michela, M.: Qu’est-ce que la confiance? Études 412(1), 53–63
32. Pew Research Center (PRC): Why Americans don't fully trust many who hold positions of power and responsibility (2019)
33. Posner, R.: Economic Analysis of Law. Wolters Kluwer, Alphen
aan den Rijn (1973)
34. Rousseau, J.-J.: Du contrat social ou Principes du droit politique.
Marc Michel Rey, Amsterdam (1762)
35. Salmon, P.: Chômage. Le fiasco des politiques. Balland, Paris
36. Schwartz, O.: You thought fake news was bad? Deep fakes are where truth goes to die. The Guardian. https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth (2018). Accessed 21 Dec 2020
37. Simmel, G.: Sociologie. Etude sur les formes de la socialisation.
Presses Universitaires de France, Paris (1999)
38. Suwajanakorn, S., Seitz, S.M., Kemelmacher-Shlizerman, I.:
Synthesizing obama: learning lip sync from audio. ACM Trans.
Graph. 36(4), 95:1-95:13 (2017)
39. Thies, J., Zollhöfer, M., Stamminger, M., Theobalt, C., Nießner,
M.: Face2Face: real-time face capture and reenactment of RGB
videos. In: 2016 IEEE Conference on Computer Vision and Pat-
tern Recognition (CVPR), pp. 2387–2395. Las Vegas, NV (2016)
40. Toews, R.: Deepfakes are going to wreak havoc on society. We
are not prepared. Forbes. https://www.forbes.com/sites/robtoews/2020/05/25/deepfakes-are-going-to-wreak-havoc-on-society-we-are-not-prepared/#21b35b157494 (2020). Accessed 21 Dec 2020
41. Turton, W., Martin, A.: How deepfakes make disinformation more
real than ever. Bloomberg. https://www.bloomberg.com/news/articles/2020-01-06/how-deepfakes-make-disinformation-more-real-than-ever-quicktake (2020). Accessed 21 Dec 2020
42. Vaccari, C., Chadwick, A.: Deepfakes and disinformation:
exploring the impact of synthetic political video on deception,
uncertainty, and trust in news. Social Media + Society 6(1),
205630512090340 (2020)
43. Whyte, C.: Deepfake news: AI-enabled disinformation as a multi-
level public policy challenge. J Cyber Policy 5(2), 199–217 (2020)
Publisher’s Note Springer Nature remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.
... In a recent contribution, Etienne (2021) argues that deepfakes do not threaten but may rather increase online trust (I engage with this argument in footnote 20). While trust is crucial for democracies, Etienne does not connect his argument with democracy theory. ...
... I thus regard functioning deliberative processes and the establishment of widely acceptable truth claims as mutually constitutive. 20 Conversely, Etienne (2021) argues that deepfakes do not threaten but rather increase online trust. His argument is three-fold: Firstly, deepfakes are simply a new tool for manipulation and disinformation. ...
... 14 Due to its timely and "historic"(Bastian, 2022) nature, a later deepfake of the Ukrainian president in the context of the war in Ukraine (which surfaced during this paper's review process) was also included in the analysis. Furthermore, anonymous reviewers suggested the consideration of three contributions, namelyPaterson and Hanley (2020);Etienne (2021); and Ruiter (2021). ...
Full-text available
Observers fear that deepfakes will shake the very foundations of democracy. Notwithstanding, in-depth scholarly analyses of deepfakes’ political impact are rare, and do not consider theories of democracy. This contribution helps close this research gap, drawing on Warren’s problem-oriented democracy theory, as well as theories of deliberative democracy and contributions on the role of trust in democracies. I identify three core functions of democratic systems and their normative foundations, namely empowered inclusion, collective agenda and will formation (supported by deliberation), and collective decision-making. Based on a literature and media analysis, I systematize different types of deepfakes serving either disinformation or hate speech and outline how they weaken core democratic functions and norms: Deepfakes impede citizens’ empowered inclusion in debates and decisions that affect them, e.g. by hampering efforts to hold political representatives accountable or further marginalizing certain societal groups such as women or ethnic minorities. Deepfakes also undermine collective agenda and will formation by threatening the epistemic quality of deliberation as well as citizens’ mutual empathy and respect. This culminates in a decreased legitimacy of collective decisions taken, which is additionally threatened by pervasive (but mostly speculative) fears of deepfake election manipulation. My analysis has implications for (future) governance efforts addressing deepfakes. Such efforts are increasing, e.g. on the part of social media platforms, but also (supra-)national regulatory bodies.
... content as the most worrisome use of AI in terms of its potential applications for crime or terrorism (Caldwell et al., 2020). But, simultaneously, this emerging technology has the potential to bring forth major business opportunities for content creation and engagement (Etienne, 2021;Farish, 2020;Kietzmann et al., 2020). ...
... This, in turn, can erode consumers' purchasing intentions and impair the accuracy of helpful technologies such as recommendation systems. Additionally, given the rapid development of deepfake technologies that can generate human-like narratives using natural language processing (NLP) such as GPT-3 (a text-generation model), it is reasonable to expect that the integration of such technologies with deepfakes will only contribute to an increase in marketplace deception (Etienne, 2021;Kietzmann et al., 2020;Westerlund, 2019). Kietzmann et al. (2020) argue that deepfakes make it more difficult for people to respond to personalized advertisements. ...
Full-text available
Deepfakes—artificial but hyper-realistic video, audio, and images created by algorithms—are one of the latest technological developments in artificial intelligence. Amplified by the speed and scope of social media, they can quickly reach millions of people and result in a wide range of marketplace deceptions. However, extant understandings of deepfakes’ implications in the marketplace are limited and fragmented. Against this background, we develop insights into the significance of deepfakes for firms and consumers—the threats they pose, how to mitigate those threats, and the opportunities they present. Our findings indicate that the main risks to firms include damage to image, reputation, and trustworthiness and the rapid obsolescence of existing technologies. However, consumers may also suffer blackmail, bullying, defamation, harassment, identity theft, intimidation, and revenge porn. We then accumulate and present knowledge on the strategies and mechanisms to safeguard against deepfake-based marketplace deception. Furthermore, we uncover and report the various legitimate opportunities offered by this new technology. Finally, we present an agenda for future research in this emergent and highly critical area.
... In this ecosystem of people, health information, and media technologies, trust emerges as a key mediator of the relationship between perceived information quality and the usage of information (Kelton et al., 2008). Trust is a social mechanism via which humans deal with the perceived instability and unpredictability of a complex world (Luhmann, 1979) and becomes a need when there is insufficient information (Etienne, 2021). Both because of mis-and disinformation about COVID-19, as well as because of the emergent nature of our medical and scientific understanding of the disease everyday life, business, governance, and public discourse have become more complex since the pandemic began (Bratianu, 2020;Gubrium & Gubrium, 2021;Pereira et al., 2021;Uhl-Bien, 2021). ...
Full-text available
Understanding the factors that influence trust in public health information is critical for designing successful public health campaigns during pandemics such as COVID‐19. We present findings from a cross‐sectional survey of 454 US adults—243 older (65+) and 211 younger (18–64) adults—who responded to questionnaires on human values, trust in COVID‐19 information sources, attention to information quality, self‐efficacy, and factual knowledge about COVID‐19. Path analysis showed that trust in direct personal contacts (B = 0.071, p = .04) and attention to information quality (B = 0.251, p < .001) were positively related to self‐efficacy for coping with COVID‐19. The human value of self‐transcendence, which emphasizes valuing others as equals and being concerned with their welfare, had significant positive indirect effects on self‐efficacy in coping with COVID‐19 (mediated by attention to information quality; effect = 0.049, 95% CI 0.001–0.104) and factual knowledge about COVID‐19 (also mediated by attention to information quality; effect = 0.037, 95% CI 0.003–0.089). Our path model offers guidance for fine‐tuning strategies for effective public health messaging and serves as a basis for further research to better understand the societal impact of COVID‐19 and other public health crises.
... One emerging technology is 'Deepfakes', an AI-based synthesis or alteration of audio and visual content. The generation of deepfakes has advanced significantly, and they could be used to propagate disinformation around the globe; in the form of fake news, they may pose a severe threat in the future [2], if they do not already. ...
Easy access to audio-visual content on social media, combined with the availability of modern tools such as TensorFlow or Keras, open-source trained models, economical computing infrastructure, and the rapid evolution of deep-learning (DL) methods, has heralded a new and frightening trend. In particular, the advent of easily available, ready-to-use Generative Adversarial Networks (GANs) has made it possible to generate deepfake media, partially or completely fabricated with the intent to deceive: to disseminate disinformation and revenge porn, to perpetrate financial fraud and other hoaxes, and to disrupt government functioning. Existing surveys have mainly focused on the detection of deepfake images and videos; this paper provides a comprehensive review and detailed analysis of existing tools and machine learning (ML) based approaches for deepfake generation, and of the methodologies used to detect such manipulations in both audio and video. For each category of deepfake, we discuss manipulation approaches, current public datasets, and key standards for evaluating the performance of deepfake detection techniques, along with their results. Additionally, we discuss open challenges and enumerate future directions to guide researchers on issues that need to be considered in order to improve both deepfake generation and detection. This work is expected to assist readers in understanding how deepfakes are created and detected, along with their current limitations and where future research may lead.
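The adversarial mechanism behind the GANs mentioned in this survey can be illustrated with a deliberately minimal toy sketch (my own illustration, not any system the survey reviews): a one-dimensional generator and a logistic discriminator play the adversarial game on scalar data, with all hyperparameters chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

REAL_MEAN = 3.0          # "real" data: scalar samples from N(3, 1)
a, b = 1.0, 0.0          # generator G(z) = a*z + b
w, c = 0.1, 0.0          # discriminator D(x) = sigmoid(w*x + c)
lr = 0.01

for step in range(3000):
    z = rng.standard_normal(64)
    fake = a * z + b
    real = REAL_MEAN + rng.standard_normal(64)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on the non-saturating loss log D(fake).
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# The generated mean (b) should drift toward the real mean over training.
print(f"generated mean after training: {b:.2f}")
```

In real deepfake pipelines, both players are deep networks trained on images or audio, but the tug-of-war is the same: the generator's samples are pushed toward whatever the discriminator currently accepts as real, which is also why detection and generation improve in lockstep.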
... While it is principally used today to produce pornographic content (Ajder et al., 2019), and while the few political applications remain mostly cautionary educative examples, things would be very different were this technique used more and more for the purpose of misinformation. Although this would not directly impact social trust, as argued elsewhere (Etienne, 2021b), the whole dynamic behind the adversarial game of building solutions (notably based on autoencoders) to counter other solutions developed from the same techniques, i.e. the fact that the algorithms created to detect misinformation are now also used to produce it, would be an edifying example of Illichian counterproductivity. ...
The Covid-19 crisis exposed the world to multidimensional challenges, which include not only the management of the pandemic by healthcare authorities, but also the moral dilemmas faced by practitioners in medical structures, the preservation of trust in information undermined by the spread of false news, the upkeep of human interactions during lockdown periods, and the dangers of online hate rising in polarizing contexts. In such circumstances, and when combined with an appropriate level of human judgement, machine learning applications were revealed to be crucial resources in supporting decision-making and implementation, resulting in more efficient and better-informed responses to these issues. This paper focuses on four social dimensions (bioethical, political, psychological and economic) from which the decisions taken in the context of the Covid-19 crisis derived major ethical implications. On the one hand, I argue against the possibility of addressing these issues through a purely algorithmic approach, elaborating on two types of limitations that automated systems face in addressing ethical issues. This leads me to discuss how different ethical situations call for different performance metrics with regard to the 'contextual explicability and performance issue', as well as to enunciate a gold principle: 'legitimacy trumps accuracy'. On the other hand, I present practical examples of machine learning applications which enhance, instead of dilute, human moral agency in better addressing these issues. I also suggest a 'moral perimeter' framework to ensure the responsibility of algorithm-assisted decision-makers for critical decisions. The unique potential of AI to 'solve' moral dilemmas by intervening on their conditions of possibility prompts me to discuss a new type of moral situation: AI-generated meta-dilemmas.
Image generation has recently seen tremendous advances, with diffusion models able to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method that takes advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require a mask to be provided, which makes the task much easier by treating it as conditional inpainting. In contrast, our main contribution is the ability to automatically generate a mask highlighting the regions of the input image that need to be edited, by contrasting the predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images.
In spite of the attractiveness of fake news stories, most people are reluctant to share them. Why? Four pre-registered experiments (N = 3,656) suggest that sharing fake news hurts one's reputation in a way that is difficult to fix, even for politically congruent fake news. The decrease in trust a source (media outlet or individual) suffers when sharing one fake news story against a background of real news is larger than the increase in trust a source enjoys when sharing one real news story against a background of fake news. A comparison with real-world media outlets showed that only sources sharing no fake news at all had trust ratings similar to mainstream media. Finally, we found that the majority of people declare they would have to be paid to share fake news, even when the news is politically congruent, and more so when their reputation is at stake.
Artificial Intelligence (AI) now enables the mass creation of what have become known as “deepfakes”: synthetic videos that closely resemble real videos. Integrating theories about the power of visual communication and the role played by uncertainty in undermining trust in public discourse, we explain the likely contribution of deepfakes to online disinformation. Administering novel experimental treatments to a large representative sample of the United Kingdom population allowed us to compare people’s evaluations of deepfakes. We find that people are more likely to feel uncertain than to be misled by deepfakes, but this resulting uncertainty, in turn, reduces trust in news on social media. We conclude that deepfakes may contribute toward generalized indeterminacy and cynicism, further intensifying recent challenges to online civic culture in democratic societies.
The advent of 'DeepFake' content that is increasingly difficult for humans and machines to distinguish as artificial portends a number of challenges to democratic societies. In order to respond effectively, policymakers must gain an understanding of how DeepFake content might manifest. This paper aims to offer the necessary context by exploring AI-enabled multimedia disinformation at different levels: (1) as a mass-produced, regular feature of the information environment in democracies and (2) as a highly tailored instrument used in tandem with cyber operations. I explore the impact of DeepFakes on the ability of populations to determine the origination, credibility, quality and freedom of information. Such macro impacts amplify the potential value of DeepFake content employed alongside targeted cyber activities, a combination that alone offers belligerent actors new opportunities for enhancing attempts at disinformation and coercion. Nevertheless, I ultimately argue that DeepFakes should be thought of more as an evolution than a revolution in disinformation techniques, the real threat of which emerges from the manner in which new abilities to produce even reasonable-fidelity fabrications rapidly and at scale combine with the multiform shape of the modern digital information environment to make organized influence efforts much more dynamic than has previously been the case.
Face2Face is an approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where YouTube videos are reenacted in real time. This live setup was also shown at SIGGRAPH Emerging Technologies 2016 by Thies et al., where it won the Best in Show Award.
Given audio of President Barack Obama, we synthesize a high quality video of him speaking with accurate lip sync, composited into a target video clip. Trained on many hours of his weekly address footage, a recurrent neural network learns the mapping from raw audio features to mouth shapes. Given the mouth shape at each time instant, we synthesize high quality mouth texture, and composite it with proper 3D pose matching to change what he appears to be saying in a target video to match the input audio track. Our approach produces photorealistic results.
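The core audio-to-mouth-shape mapping described in this abstract can be sketched, in heavily simplified form, as the forward pass of a vanilla recurrent network over a sequence of audio feature frames. The dimensions (13 audio features per frame, 18 mouth-shape values) and the random weights are illustrative assumptions, not the trained network from the paper, which used hours of aligned footage.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical dimensions: 13 MFCC-like audio features per frame,
# 18 mouth landmark coordinates as the "mouth shape" output.
AUDIO_DIM, HIDDEN_DIM, MOUTH_DIM = 13, 32, 18

# Randomly initialised vanilla RNN parameters; a real system would
# learn these from paired audio/video training data.
W_xh = 0.1 * rng.standard_normal((HIDDEN_DIM, AUDIO_DIM))
W_hh = 0.1 * rng.standard_normal((HIDDEN_DIM, HIDDEN_DIM))
W_hy = 0.1 * rng.standard_normal((MOUTH_DIM, HIDDEN_DIM))
b_h = np.zeros(HIDDEN_DIM)
b_y = np.zeros(MOUTH_DIM)

def audio_to_mouth(audio_frames):
    """Run the RNN over audio feature frames and return one
    mouth-shape vector per frame."""
    h = np.zeros(HIDDEN_DIM)
    shapes = []
    for x in audio_frames:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)  # recurrent state update
        shapes.append(W_hy @ h + b_y)           # per-frame mouth shape
    return np.array(shapes)

frames = rng.standard_normal((100, AUDIO_DIM))  # 100 frames of dummy audio features
mouth = audio_to_mouth(frames)
print(mouth.shape)  # (100, 18)
```

The predicted mouth shapes are then used, per the abstract, to synthesize mouth texture and composite it into a target video with 3D pose matching; those rendering stages are well beyond this sketch.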