The Emergence of Deepfakes and its Societal Implications:
A Systematic Review
Dilrukshi Gamage, Department of Innovation Science, Tokyo Institute of Technology, Tokyo, Japan. dilrukshi.gamage@acm.org
Jiayu Chen, Department of Psychology and Human Developmental Sciences, Nagoya University, Nagoya, Japan. chen.jiayu@h.mbox.nagoya-u.ac.jp
Kazutoshi Sasahara, Department of Innovation Science, Tokyo Institute of Technology, Tokyo, Japan. sasahara.k.aa@m.titech.ac.jp
Abstract
Deepfake tools and technologies are proliferating in public. Scholarly research is heavily centered on the technology of deepfakes but sparse in understanding how their emergence impacts society. In this systematic review, we explored deepfake scholarly works that discuss societal implications rather than taking a technology-centered focus. We extracted studies from major publication databases: Scopus, Web of Science, IEEE Xplore, ACM Digital Library, Springer Digital Library, and Google Scholar. The corpus reflects patterns based on research methodologies, areas of focus, and the distribution of such research. Out of 787 works, 88 were highly relevant, with the majority of the studies being reviews of the literature. While research focus is generally drawn to security-related harms, less attention is paid to issues such as ethical implications and legal regulation in areas other than pornography, psychological safety, cybercrime, terrorism, and more. Research on the social impact of Deepfakes is emerging, and this paper brings insights drawn from methodological, subject-focus, and distributional points of view.
1 Introduction
The rapid development of technologies such as Artificial Intelligence (AI) and Deep Learning (DL) has revolutionized the way we create and consume content. As a byproduct of this revolution, we witness emerging technologies such as Deepfake, which may potentially harm and distress social systems. Deepfakes are synthetic media generated using sophisticated algorithms; they depict things that never actually happened, computer-generated for manipulation purposes (Westerlund, 2019). In many cases, specific Deep Learning methods, such as autoencoders and Generative Adversarial Networks (GANs), are the Machine Learning (ML) techniques used to generate these synthetic media.
Currently, a myriad of scholarly works concentrate on specific Deep Learning techniques: autoencoders, neural network models trained to restore (copy) their input data; GAN models, which pit a generator against a discriminator to build images ever closer to the originals; high-definition face image generation; Conditional GANs (CGANs), which generate data while controlling attributes by supplying attribute information alongside images during training; face-swapping techniques; and speech synthesis techniques (Guarnera et al., 2020). These studies are driven by Deepfake generation and detection methods. However, the advancement of this scholarly work and the democratization of these technologies have made it easy for any individual to generate realistic fake media content, which would have been difficult previously.
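As a rough illustration of the generator-discriminator dynamic described above, the following sketch shows a minimal GAN training loop (our illustration in PyTorch; the toy dimensions, layer sizes, and random stand-in data are assumptions, not any of the surveyed architectures):

```python
# Minimal GAN sketch: a generator learns to fool a discriminator.
# Toy dimensions and random stand-in data are assumed for illustration.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch, data_dim)            # stand-in for real media samples
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: push the discriminator's output on fakes toward 1.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
```

A Conditional GAN extends this setup by feeding an attribute vector to both networks alongside the noise or sample, which is what enables the attribute-controlled generation mentioned above.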
Apart from the incident that gave rise to Deepfakes in 2017, in which celebrity faces were used to create pornographic videos with Deepfake technologies (Burkell and Gosse, 2019), incidents such as the British energy company scammed by voice Deepfake technology in 2019 (Stupp, 2019), and the recent arrest of a Japanese student for posting pornographic videos that synthesized the face of a celebrity using Deepfake technology, with the model trained for about a week on 30,000 images per video, believed to be the first criminal case in Japan in which Deepfake technology was abused (Times, 2020), can be highlighted as emerging abuses of Deepfakes. More recently still, on March 10th, 2021, a mother in Pennsylvania was charged with using Deepfake technology to forge photos and videos depicting her daughter's high-school cheerleading teammates drinking, smoking, and naked (Guardian, 2021), and a New Yorker article of July 15th, 2021 inquired into the ethical implications of Deepfake voice in a documentary about celebrity chef Anthony Bourdain (Rosner, 2021). Together, such incidents have demonstrated the emerging threats unsettling the social process.
Although Deep Learning technologies are versatile and could be useful in revolutionizing various industries, these incidents collectively raise concerns about the societal problems emerging from them. There is ample work in computer science on the automatic generation (Yadav and Salmani, 2019; Caldelli et al., 2021) and detection of Deepfakes (Maksutov et al., 2020; Rana and Sung, 2020), but to date only a handful of social scientists have examined the social impact of Deepfake technology. In this paper, we conducted a systematic literature review to understand the existing landscape of research examining the possible effects Deepfakes might have on people, to understand the psychological dynamics of deepfakes, and to discover how they impact society. In particular, we examine the following two research questions:
Q1: What types of research were conducted between 2017-2021 to understand the psychological and social dynamics and societal implications of Deepfakes?
Q2: What is the distribution of Deepfake research between 2017-2021 that explores any type of psychological dynamics and its societal implications?
The objective of this systematic study is to highlight the types of research carried out to understand the social dynamics of Deepfakes and to identify gaps in the research that need further discussion of the social implications and concerns arising from the technology. This exploration of research related to social processes and the implications of Deepfakes provides necessary projections and points to areas of scholarly work where social scientists could make useful contributions by recognizing missing directions. Since Deepfakes originate in Deep Learning and Machine Learning, much of the advancement and research has occurred in the field of computer science. In addition, with the democratization of the technology to a wider audience, attention to the societal implications of this phenomenon is paramount.
Search Database Hits Selected
Springer Online Database 177 17
IEEE 154 11
ACM 264 8
Web of Science 137 41
Scopus 55 2
Other (Google Scholar) NA 9
Total 787 88
Table 1: Summary of the results retrieved by running the search query and manually filtered by review according to the inclusion criteria.
2 Methods
We obtained articles for our systematic review by searching popular scientific search engines and repositories: Springer Digital Database, IEEE Xplore, ACM Digital Database, Web of Science, and Scopus. Most systematic reviews incorporate the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) protocols, explained in detail by Moher et al. (2015). We followed a similar structure for this literature review, with particular interest in the two previously mentioned research questions. We used the following search query in all 5 databases and, in addition, used Google Scholar to search for any other relevant preprints or non-peer-reviewed articles, bringing more inclusivity to the research by covering work that may not have been listed in ACM, Scopus, IEEE, Web of Science, or any other database.
{Deepfake OR Artificial Intelligence}
AND Misinformation
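For reproducibility, the same boolean query must be rendered in each engine's advanced-search syntax. A minimal sketch follows (our illustration; the field tags such as TITLE-ABS-KEY and TS are our assumptions about each engine's query language, not part of the original protocol):

```python
# Sketch: one boolean query rendered per database's advanced-search syntax.
# Field tags are assumptions about each engine, not from the original protocol.
base = '("deepfake" OR "artificial intelligence") AND "misinformation"'

queries = {
    "Scopus": f"TITLE-ABS-KEY({base})",   # title/abstract/keyword fields
    "Web of Science": f"TS=({base})",     # topic search
    "IEEE Xplore": base,                  # entered against 'All Metadata'
    "ACM Digital Library": base,
    "Springer Digital Database": base,
}

for db, query in queries.items():
    print(f"{db}: {query}")
```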
We did not restrict our search to journal papers, but allowed any peer-reviewed paper, commentary article, critical review, or even work-in-progress paper, including preprints. After the search terms produced the dataset, two experienced researchers filtered the results against an inclusion criterion: we were particularly careful to select results only if the manuscripts examined perceptions of Deepfakes or their impact on human interaction, or discussed the social implications of Deepfakes. In other words, articles that discussed a pure technology perspective (such as GANs), or studies proposing new techniques for Deepfake detection, were eliminated as irrelevant to this study. Figure 1 describes the process conducted to obtain the data relevant to the analysis.
Figure 1: Flow of the systematic review
2.1 Dataset
Our initial search query extracted 787 articles from the 5 databases. The extracted results were combined into a single data file, and two researchers collectively filtered them further against the inclusion criteria depicted in Figure 1 by manually reviewing the abstracts. In addition to these filtered articles, further papers were added based on relevant research found via Google Scholar; we labeled this source as "Other". Although a Google Scholar advanced search returned 3,420 hits, given the depth and spread of the articles we focused only on the first 20 pages, which held 200 hits, and selected 9 highly relevant papers not included in any of the databases. Of these, 4 papers were from journals and 2 from university repositories not listed in any of the 5 databases; another 2 were preprints currently under review, and 1 was a commentary from Nature. We found 79 highly relevant papers from the 5 original databases, which together with the Google Scholar results gave 88 papers selected for analysis. A breakdown is depicted in Table 1.
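A minimal sketch of the merge-and-deduplicate step that precedes the manual abstract screening (our illustration, assuming CSV exports with a title column; not the authors' actual scripts):

```python
# Sketch: combine per-database exports into one file and drop duplicates
# before manual abstract screening. File names and columns are assumed.
import pandas as pd

files = {
    "Springer": "springer.csv",
    "IEEE": "ieee.csv",
    "ACM": "acm.csv",
    "Web of Science": "wos.csv",
    "Scopus": "scopus.csv",
}

frames = []
for source, path in files.items():
    df = pd.read_csv(path)
    df["source"] = source            # remember where each record came from
    frames.append(df)

corpus = pd.concat(frames, ignore_index=True)

# Normalize titles so case/whitespace differences don't hide duplicates.
corpus["title_key"] = corpus["title"].str.lower().str.strip()
corpus = corpus.drop_duplicates(subset="title_key")

print(len(corpus), "unique records ready for abstract screening")
```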
2.2 Measures
To answer RQ1, we analyzed the full text of all 88 papers, summarized key phrases, highlighted major findings in the respective papers, and identified themes under which each article could be categorized. Based on the summaries and key phrases, it was evident that the corpus could be categorized from a common methodological standpoint. For example, we noted whether each article conducted an experiment to understand social dynamics, applied some form of methodical analysis of social impact, resulted from an extensive critical review positioning particular premises, or provided a conceptual proposal or framework beyond a review of the Deepfake social phenomenon. At the same time, we examined whether the corpus focused on particular domain areas addressing Deepfake social issues. We generated word clouds from each abstract to support subjective judgment on categories and focus areas.
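A minimal sketch of the word-cloud step (our illustration using the open-source wordcloud package; the input text is a placeholder):

```python
# Sketch: build a word cloud from one abstract to support categorization.
from wordcloud import WordCloud
import matplotlib.pyplot as plt

abstract_text = "deepfake pornography consent harm legal regulation"  # placeholder

cloud = WordCloud(width=800, height=400, background_color="white",
                  collocations=False).generate(abstract_text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```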
To answer RQ2, on the distribution of research into Deepfake psychological dynamics and societal implications, we reported descriptive statistics alongside a network analysis that captures the connections between research types and their emphases. To highlight each paper's emphasis, we drew on the generated word clouds, depicted the categorical flows based on frequencies, and used a network diagram built with the Gephi software to illustrate the author distribution among the selected papers.
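A minimal sketch of the author-paper network construction (our illustration with networkx and toy data; the paper's figures were produced with Gephi, which can open the exported GEXF file):

```python
# Sketch: bipartite author-paper graph with degree centrality,
# exported for visualization in Gephi. Toy data is assumed.
import networkx as nx

author_paper_pairs = [
    ("Author A", "Paper 1"),
    ("Author B", "Paper 1"),
    ("Author A", "Paper 2"),
    ("Author C", "Paper 3"),
]

G = nx.Graph()
for author, paper in author_paper_pairs:
    G.add_node(author, kind="author")
    G.add_node(paper, kind="paper")
    G.add_edge(author, paper)

centrality = nx.degree_centrality(G)   # relative connectedness of each node
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:3])
print("average clustering:", nx.average_clustering(G))  # 0.0 if no triangles

nx.write_gexf(G, "author_paper_network.gexf")  # open this file in Gephi
```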
3 Results and Discussion
Overall, the majority of the query results were scholarly works related to Deep Learning, AI, and ML technologies and their improvements in creating or detecting Deepfakes. Only 88 out of 787 were selected, as these works were found to discuss psychological dynamics, social implications, harms to society, ethical standpoints, and/or solutions from a socio-technological point of view.
3.1 RQ1: Types of research
Examining the abstracts and full texts of the articles, we identified that each article could be categorized into 11 types of research: Systematic Review, Review based on Literature, Philosophical Mode of Enquiry, Examines, Experiment, Network Analysis, Content Analysis, Design, Conceptual Proposal, Commentary, and Analysis by Examples. Although these categories are based on the subjective judgment of the authors, they provide a solid understanding of the conducted research based on its main objectives and methods.
A magnified view of this dataset (88 papers) revealed that the largest group (30 papers) consists of critical reviews based on previous literature, while a further 21 papers conducted active experiments with real users to explore the social and psychological dynamics of perceiving Deepfakes or to understand their impact. Only one study performed a network analysis based on Deepfake discourse, and a limited number of other papers used the remaining methods, as depicted in Figure 2. Apart from the methodological point of view, we also derived key categories of the papers based on their focus areas. Although our key interest centered on Deepfakes and their social impact, we observed that the relevant research covered a wide range of focus areas in different subject domains, ranging from security aspects, pornography, legal concerns, and Deepfake media (specifically video and images) to psychological, political, and human-cognition perspectives, and more. Therefore, to specifically answer RQ1, we describe the details of these methodologies and focus areas in the following sections.
Methodology used in Deepfake social implication research
Although methodical approaches to research are not new, our analysis of the 88 papers highly relevant to the social or psychological implications of Deepfakes reflects that most research in this domain is still developing: many researchers are critically evaluating and analyzing Deepfake phenomena based on the previous literature and discussing potential future outcomes. We categorized this type of research as Review based on Literature; in our corpus, the earliest critical reviews of Deepfake social implications appeared in 2019 (although the term "Deepfake" first appeared in 2017 (Westerlund, 2019)). Research by Westling (2019) raises the question of whether the Deepfake phenomenon is shallow or deep and how society might react to these technologies. Specifically, the paper critically analyses and provides nuances to the technology that generates deepfake media and its uses, showing that society has never relied solely on content as a source of truth. Similarly, Antinori (2019) provides an extensive narration of Deepfakes and relates their consequences to terrorism. The author does not follow a systematic approach; rather, there is a critical discussion of Deepfakes focused on the near future of security threats, using examples from previous literature and emphasizing the need for awareness, law enforcement, and policymakers to implement effective counter-terrorism strategies. While providing this background and previous work, the author also articulates his own stance on the subject, emphasizing that, as a globalized community, we are transitioning from e-terrorism to a coming online terrorism, and from linearity to hyper-complexity through the malicious use of AI, living in the post-truth era of a social system. Since his article provides not only a critical review of past literature but also the author's theoretical and qualitative research experience from participating and working as a counter-terrorism expert in related projects, we also intersected it with a new category: Examines. Through our full-text analysis, we observed that many other Review based on Literature works intersect with the Examines category. In these articles, authors critically contribute their experience or use their point of view as a metaphor to build constructs. Altogether, we found that 11 of the 30 papers categorized as Review based on Literature illustrated this intersection. For example, the review article by Hancock and Bailenson (2021) attempts to understand the possible effects Deepfakes might have on people and how psychological and media theories apply. In addition, the article by Öhman (2019) brings a philosophical mode of enquiry to the "pervert's dilemma", an abstraction about fantasizing sexual pornography, and argues from literature and theory that ethical perspectives underlie such dilemmas. Similar placements of arguments and concepts supported by literature review can be found in articles by Taylor (2021), Kerner and Risse (2021), Langa (2021), Ratner (2021), Harper et al. (2021), Langguth et al. (2021), and Greenstein (2021). However, we also derived 4 research articles that fall into the Examines category without a dominating critical literature review. For example, one article examines US and British legislation, indicating legislative gaps and inefficiencies in the existing legal solutions and presenting a range of proposals for legislative change to close the constitutional gaps around porn (Mania, 2020). Another examines current online propaganda tools in the context of the changing information environment and provides examples of their use, while seeking to educate about Deepfake tools and the future of propaganda (Pavlíková et al., 2021). A further study examines the problem of unreliable information on the internet and its implications for the integrity of elections and representative democracy in the U.S. (Zachary, 2020), and another addresses the economic factors that make confrontational conversation more or less likely in our era, bringing in viewpoints on Deepfakes becoming more widespread on the dark web (Greenstein, 2021); these all fall into the Examines category.
Figure 2: Scholarly work distribution based on the year of publication, the publication database, and the methodology.
However, alongside review-based articles and articles that conducted extensive examination, we also derived another category. Although it is similar to the methods previously stated, it is distinguished by the way it positions its points of view: we noticed that these articles rest extensively on use cases, examples of incidents, or descriptions of theoretical and informational AI and Deepfake technologies. We named this category Analysis by Example and found 5 papers under its umbrella. Articles in this category include Pantserev (2020), with its examples of Deepfakes in the modern world and the internet services that generate them; Amelin and Channov (2020), who study legal regulation of facial processing technologies; Caldwell et al. (2020), who study possible applications of artificial intelligence and related technologies in the perpetration of crimes; and Degtereva et al. (2020), who provide a general analysis of the risks and hazards of the technologies and analyze examples of legal remedies available to victims. We also identified a category named Philosophical Mode of Enquiry, which includes papers that premise their enquiry into the social issues of Deepfake applications on a philosophical point of view (Öhman, 2019; Ziegler, 2021; Floridi, 2018; Hazan, 2020; Kwok and Koh, 2021).
However, since developments in the area of social implications of Deepfakes are still growing, we observed only 2 Systematic Review studies that explain the growing body of literature in detail through systematic analysis (Godulla et al., 2021; Westerlund, 2019). The first used English-language deepfake research to identify salient discussions; the other used 84 publicly available online news articles to examine what deepfakes are, who produces them, and the benefits and threats of deepfake technology, in 2021 and 2019 respectively.
Apart from these critical reviews, examiner papers, analyses by example, and systematic reviews, we found one other method that could be classified into the same theme but is distinct in its narration, as it is delivered as a personal opinion or commentary on certain events. We named this category Commentary; such pieces often provide a short narrative on the future of technological implications (Kalpokas, 2021; LaGrandeur, 2021; Beridze and Butcher, 2019; Strickland, 2018, 2019).
As the next category of methodology, we observed that 21 of the 88 papers depicted some sort of experiment using human subjects to understand the impact and social implications of Deepfakes; we named this category Experiment. Here, Khodabakhsh et al. (2019) used 30 users to examine human judgment of Deepfake videos, and Caramancion (2021) used 161 users to explore the relationship between a person's demographic data and political ideology and the risk of falling prey to mis/disinformation attacks. The largest study, conducted by Yaqub et al. (2020), used 1,512 users to explore the impact of four types of credibility indicators on people's intent to share news headlines with their friends on social media. Similarly, Dobber et al. (2021) studied effects on political attitudes using 271 users, and Köbis et al. (2021) studied the inability of people to reliably detect Deepfakes using 210 users; their research found that neither education nor financial incentives improved detection accuracy. Many other similar studies are contained in this category. Apart from experiments, we also found research articles proposing frameworks or solutions to Deepfake societal issues by conceptualizing theoretical frameworks (Cakir and Kasap, 2020; Kietzmann et al., 2020b,a), which we named Conceptual Proposals. Beyond conceptual proposals, some articles contained clear design goals with implementation plans, or artifacts designed as solutions to Deepfake societal issues (Chi et al., 2020; Qayyum et al., 2019; Chen et al., 2018; Sohrawardi et al., 2019; Inie et al., 2020); thus we introduced a category named Design.
Apart from these dominant methods for observing the social implications and perceptions of Deepfakes, we also found 7 articles that followed the Content Analysis method. Three used Twitter data as their corpus (Maddocks, 2020; Oehmichen et al., 2019; Hinders and Kirn, 2020), two analyzed article content in news media (Brooks, 2021; Gosse and Burkell, 2020), one analyzed YouTube comment discourse about Deepfakes (Lee et al., 2021), and one analyzed journalist discourse (Wahl-Jorgensen and Carlson, 2021) to understand the social implications of the Deepfake phenomenon. Similar to these studies, we categorized one further study as Network Analysis: it conducted semantic content analysis using Twitter data relating to Deepfake phenomena (Dasilva et al., 2021) to understand the social discourse.
Range of focus areas examining Deepfake and its social implications
Apart from the key categorization by research methods, we examined the significant research questions these methods are used to solve. This aids us in categorizing Deepfake social research by the subject areas on which it focuses. We derived 30 main focus areas on which these articles primarily concentrate, followed by 44 sub-focus areas. This flow is graphically represented in the alluvial diagram in Figure 3. In the interest of space, we highlight the top 5 focus areas of research.
As it appears, the highest interest is drawn to Security-related issues connected to the social implications of Deepfakes. A significant amount of the security-related research foresees harms and threats to society through "Review of Literature" (Repez and Popescu, 2020; Taylor, 2021; Kaloudi and Li, 2020; Rickli and Ienca, 2021). Further security-focused research is conducted through the "Design" of a blockchain-based framework for preventing fake news while introducing various design issues (Chi et al., 2020). At the same time, security-focused research is visible in the "Analysis by Example" method, where Degtereva et al. (2020) conduct a general analysis to understand the risks and hazards of the technologies used today and highlight the need for wider application and enhancement of Deepfake technology to fight cybercrime. Similarly, Pantserev (2020) analyses a wide range of examples of deepfakes in the modern world and the internet services that generate them, with a key focus on security; this research also exhibits a clear sub-focus area of Psychological Security, as it tries to understand the threats Deepfakes pose to society and their impacts.
Figure 3: All 88 papers are categorised by the main methodology and focus area of the research. Highlighted in color are the top five focus areas by frequency: Security, Synthetic Media, Psychology, Legal Regulation, and Political.
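The frequency ranking behind the top five focus areas can be reproduced along these lines (our illustration; the label list is a toy stand-in for the coded corpus):

```python
# Sketch: tally main focus-area labels and rank the top 5, as in Figure 3.
from collections import Counter

focus_labels = ["Security", "Synthetic Media", "Security", "Psychology",
                "Legal Regulation", "Political", "Security", "Psychology"]

for area, count in Counter(focus_labels).most_common(5):
    print(area, count)
```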
The next-highest focus area of the literature addresses problems relating to Synthetic Media, mostly Deepfakes in the form of videos. We observed that most researchers used synthetic media to conduct "Experiments" and "Content Analysis". For instance, Iacobucci et al. (2021) test whether simple priming with deepfake information significantly increases users' ability to recognize synthetic media; Hwang et al. (2021) examined the negative impact of deepfake video and the protective effect of media literacy education; and Murphy and Flynn (2021) examined how Deepfake videos may distort memory for public events, yet found they may not always be more effective than simple misleading text. Beyond these, Brooks (2021) used "Content Analysis" of popular news and magazines to understand the impact of synthetic media. Interestingly, the article argues that if fake videos are framed as a technical problem, solutions will likely involve new systems and tools, whereas if fake videos are framed as a social, cultural, or ethical problem, the solutions needed will be legal or behavioral. In this article, the focus on synthetic media also expands into the sub-focus of examining societal Harms/Threats. Similarly, Hinders and Kirn (2020) emphasize that while digital photos are easy to manipulate, deepfake videos are more important to understand, since deepfake synthetic media (video evidence) can be deliberately misleading and hard to recognize as fake. Apart from content analysis, the focus on synthetic media narrows further in a few commentary-based articles: one examines the implications of Deepfake videos on Facebook (Strickland, 2019), and two others examine the challenges of Deepfake videos with a sub-focus on understanding Future Challenges (Kalpokas, 2021; LaGrandeur, 2021).
The next-largest set of research articles focuses mainly on the areas of Psychology, Legal Regulation, and Politics. Interestingly, all psychologically focused research was conducted as experiments, except for one study that addresses the psychological impact of Deepfakes through a review of literature (Hancock and Bailenson, 2021). In experiments, Yaqub et al. (2020) explore the effect of credibility signals on individuals' willingness to share fake news, and Khodabakhsh et al. (2019) focus on understanding the vulnerability of human judgement to Deepfakes. Ahmed (2021b) examines the social impact of Deepfakes using an online survey sample in the United States, investigating psychological aspects of the impact of Deepfakes while examining citizens' concerns regarding deepfakes, exposure to deepfakes, inadvertent sharing of deepfakes, the cognitive ability of individuals, and social media news skepticism. Cochran and Napshin (2021) address psychological aspects of Deepfakes by exploring factors impacting the perceived responsibility of online platforms to regulate deepfakes, and provide implications for users of social media, social media platforms, technology developers, and broader society. The research focusing on Legal Regulation has worked extensively on Deepfake pornography, discussing its ethical perspective, consequences, and legal frameworks for taking action (e.g., Karasavva and Noorbhai, 2021; Delfino, 2020; Gieseke, 2020). A few others had sub-foci on discussing threats and harms (O'Donnell, 2021), terrorism (Antinori, 2019), and facial processing technologies specifically (Amelin and Channov, 2020). The politically focused research has worked extensively on election-related consequences of Deepfakes, with a few studies focusing on journalistic discourse shaping the political context (Wahl-Jorgensen and Carlson, 2021), exploring the relationship between political and pornographic deep fakes (Maddocks, 2020), and discussing the threat of Deepfake online propaganda tools (Pavlíková et al., 2021).
Figure 4: Word clouds from abstracts identified as focusing on Pornography (top) and from all articles (bottom).
3.2 RQ2: Distribution of the research
In the previous sections, we partially stated the distributions of research methods and focus areas using Figures 2 and 3. We further expanded our view of the landscape of Deepfake research concentrating on societal impacts by examining the yearly distribution of the relevant research. As depicted in Figure 2, the yearly projection shows that studies exploring the social implications of Deepfakes have been emerging since 2019, with 2021 having the highest number of such studies (42) even before the year's end.
We generated word clouds for each abstract, plus one common word cloud combining all 88 abstracts, to make sense of what we examined and to summarize the analysis of the full texts. The top word cloud in Figure 4 was generated from an abstract we categorized as Pornography (Gieseke, 2020) and shows words centered on pornography; the bottom shows the word cloud from all abstracts, which reflects Deepfake as the central theme while highlighting the other focus areas that resonated strongly with our categorizations. Finally, to better understand the distribution of the authors of these papers, we generated bipartite networks using the author list together with the titles of the papers they have written (Figure 5). Nodes represent the authors (pink) and papers (green), and edges point from authors to papers. It appears that researchers who explore Deepfake social implications are almost entirely unconnected to each other: the clustering coefficient is 0.0, nearly 30% of the papers were written by 70% of the authors, and the most common relationship is a single author writing a single paper. Ranked by degree centrality (how many papers per author and authors per paper), the graph revealed a lowest degree of 1 and a highest of 8. To check whether any 2 or more authors collaborated in writing this social research, we filtered the graph to degree centrality of 2 to 8, as sketched below. Interestingly, this revealed only two authors with a degree-2 relationship: in one instance, the same author wrote two different papers while collaborating with multiple other authors (Kietzmann et al., 2020a,b); in the other, the same author wrote two papers without any co-authors (Ahmed, 2021c,a).
Figure 5: [Left] A bipartite graph created using authors as sources and papers as targets. [Right] The bipartite graph filtered to degree centrality larger than 2.
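The degree filter behind the right panel of Figure 5 can be sketched as follows (continuing the networkx illustration from Section 2.2; G is the assumed author-paper graph):

```python
# Sketch: keep only nodes with degree >= 2, mirroring Figure 5 (right).
# `G` is the bipartite author-paper graph from the earlier networkx sketch.
core_nodes = [n for n, d in G.degree() if d >= 2]
G_filtered = G.subgraph(core_nodes).copy()
print(G_filtered.number_of_nodes(), "nodes remain after degree filtering")
```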
4 Conclusions
Our study provides a comprehensive review of Deepfake research whose primary focus is the social implications of Deepfakes, as opposed to reviews of the technology itself. We selected 88 papers highly relevant to our study and, based on methodological aspects, found 11 types of studies into which they could be categorized. Across all 88 papers, we also found that the majority of studies focus on security-related research, discussing possible harms and threats to the social ecosystem. Much-debated issues, such as the ethical implications of Deepfakes and regulatory or legal solutions beyond pornography, as well as awareness-raising and educational activism against other types of harm, especially cybercrime and terrorism, remain sparse in the landscape. Our results suggest that the social science of Deepfakes is emerging, but such research has been conducted independently thus far. Given that Deepfakes and related AI technologies are being weaponized, the social implications of Deepfakes should be investigated further through interdisciplinary effort.
Acknowledgments
This work is generously supported by JST, CREST
Grant Number JPMJCR20D3, Japan.
References
Saifuddin Ahmed. 2021a. Fooled by the fakes: Cog-
nitive differences in perceived claim accuracy and
sharing intention of non-political deepfakes. Per-
sonality and Individual Differences, 182:111074.
Saifuddin Ahmed. 2021b. Navigating the maze:
Deepfakes, cognitive ability, and social media
news skepticism. new media & society, page
14614448211019198.
Saifuddin Ahmed. 2021c. Who inadvertently shares
deepfakes? analyzing the role of political interest,
cognitive ability, and social network size. Telemat-
ics and Informatics, 57:101508.
Roman Amelin and Sergey Channov. 2020. On the le-
gal issues of face processing technologies. In Inter-
national Conference on Digital Transformation and
Global Society, pages 223–236. Springer.
Arije Antinori. 2019. Terrorism and deepfake: From
hybrid warfare to post-truth warfare in a hybrid
world. In ECIAIR 2019 European Conference on
the Impact of Artificial Intelligence and Robotics,
page 23. Academic Conferences and publishing lim-
ited.
Irakli Beridze and James Butcher. 2019. When seeing
is no longer believing. Nature Machine Intelligence,
1(8):332–334.
Catherine Francis Brooks. 2021. Popular discourse
around deepfakes and the interdisciplinary challenge
of fake video distribution. Cyberpsychology, Behav-
ior, and Social Networking, 24(3):159–163.
Jacquelyn Burkell and Chandell Gosse. 2019. Nothing
new here: Emphasizing the social and cultural con-
text of deepfakes. First Monday.
Duygu Cakir and Özge Yücel Kasap. 2020. Audio to video: Generating a talking fake agent. In International Online Conference on Intelligent Decision Science, pages 212–227. Springer.
Roberto Caldelli, Leonardo Galteri, Irene Amerini, and
Alberto Del Bimbo. 2021. Optical flow based cnn
for detection of unlearnt deepfake manipulations.
Pattern Recognition Letters, 146:31–37.
M Caldwell, JTA Andrews, T Tanay, and LD Grif-
fin. 2020. Ai-enabled future crime. Crime Science,
9(1):1–13.
Kevin Matthe Caramancion. 2021. The demographic
profile most at risk of being disinformed. In 2021
IEEE International IOT, Electronics and Mechatron-
ics Conference (IEMTRONICS), pages 1–7. IEEE.
Weiling Chen, Chenyan Yang, Gibson Cheng, Yan Zhang, Chai Kiat Yeo, Chiew Tong Lau, and Bu Sung Lee. 2018. Exploiting behavioral differences to detect fake news. In 2018 9th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), pages 879–884. IEEE.
Hongmei Chi, Udochi Maduakor, Richard Alo, and
Eleason Williams. 2020. Integrating deepfake de-
tection into cybersecurity curriculum. In Proceed-
ings of the Future Technologies Conference, pages
588–598. Springer.
Justin D Cochran and Stuart A Napshin. 2021. Deep-
fakes: awareness, concerns, and platform account-
ability. Cyberpsychology, Behavior, and Social Net-
working, 24(3):164–172.
Jesús Pérez Dasilva, Koldobika Meso Ayerdi, Terese Mendiguren Galdospin, et al. 2021. Deepfakes on Twitter: Which actors control their spread? Media and Communication, 9(1):301–312.
Viktoria Degtereva, Svetlana Gladkova, Olga
Makarova, and Eduard Melkostupov. 2020.
Forming a mechanism for preventing the viola-
tions in cyberspace at the time of digitalization:
Common cyber threats and ways to escape them.
In Proceedings of the International Scientific
Conference-Digital Transformation on Manufactur-
ing, Infrastructure and Service, pages 1–6.
Rebecca A Delfino. 2020. Pornographic deepfakes:
The case for federal criminalization of revenge
porn’s next tragic act. Actual Probs. Econ. & L.,
page 105.
Tom Dobber, Nadia Metoui, Damian Trilling, Natali
Helberger, and Claes de Vreese. 2021. Do (micro-
targeted) deepfakes have real effects on political at-
titudes? The International Journal of Press/Politics,
26(1):69–91.
Luciano Floridi. 2018. Artificial intelligence, deep-
fakes and a future of ectypes. Philosophy & Tech-
nology, 31(3):317–321.
Anne Pechenik Gieseke. 2020. "The new weapon of choice": Law's current inability to properly address deepfake pornography. Vand. L. Rev., 73:1479.
Alexander Godulla, Christian P Hoffmann, and Daniel
Seibert. 2021. Dealing with deepfakes–an interdis-
ciplinary examination of the state of research and
implications for communication studies. SCM Stud-
ies in Communication and Media, 10(1):72–96.
Chandell Gosse and Jacquelyn Burkell. 2020. Politics
and porn: how news media characterizes problems
presented by deepfakes. Critical Studies in Media
Communication, 37(5):497–511.
Shane Greenstein. 2021. The economics of confronta-
tional conversation. IEEE Micro, 41(2):86–88.
The Guardian. 2021. Mother charged with deepfake
plot against daughter’s cheerleading rivals.
Luca Guarnera, Oliver Giudice, and Sebastiano Bat-
tiato. 2020. Deepfake detection by analyzing con-
volutional traces. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recog-
nition Workshops, pages 666–667.
Jeffrey T Hancock and Jeremy N Bailenson. 2021. The
social impact of deepfakes. Cyberpsychology, be-
havior and social networking, 24(3):149–152.
Craig A Harper, Dean Fido, and Dominic Petronzi.
2021. Delineating non-consensual sexual image of-
fending: Towards an empirical approach. Aggres-
sion and violent behavior, page 101547.
Susan Hazan. 2020. Deep fake and cultural truth-
custodians of cultural heritage in the age of a
digital reproduction. In International Confer-
ence on Human-Computer Interaction, pages 65–80.
Springer.
Mark K Hinders and Spencer L Kirn. 2020. Cranks
and charlatans and deepfakes. In Intelligent Feature
Selection for Machine Learning Using the Dynamic
Wavelet Fingerprint, pages 297–346. Springer.
Yoori Hwang, Ji Youn Ryu, and Se-Hoon Jeong. 2021.
Effects of disinformation using deepfake: The pro-
tective effect of media literacy education. Cy-
berpsychology, Behavior, and Social Networking,
24(3):188–193.
Serena Iacobucci, Roberta De Cicco, Francesca
Michetti, Riccardo Palumbo, and Stefano Pagliaro.
2021. Deepfakes unmasked: The effects of infor-
mation priming and bullshit receptivity on deepfake
recognition and sharing intention. Cyberpsychology,
Behavior, and Social Networking, 24(3):194–202.
Nanna Inie, Jeanette Falk Olesen, and Leon Derczyn-
ski. 2020. The rumour mill: Making the spread of
misinformation explicit and tangible. In Extended
Abstracts of the 2020 CHI Conference on Human
Factors in Computing Systems, pages 1–4.
Nektaria Kaloudi and Jingyue Li. 2020. The ai-based
cyber threat landscape: A survey. ACM Computing
Surveys (CSUR), 53(1):1–34.
Ignas Kalpokas. 2021. Problematising reality: the
promises and perils of synthetic media. SN Social
Sciences, 1(1):1–11.
Vasileia Karasavva and Aalia Noorbhai. 2021. The real
threat of deepfake pornography: a review of cana-
dian policy. Cyberpsychology, Behavior, and Social
Networking, 24(3):203–209.
Catherine Kerner and Mathias Risse. 2021. Beyond
porn and discreditation: Epistemic promises and
perils of deepfake technology in digital lifeworlds.
Moral Philosophy and Politics, 8(1):81–108.
Ali Khodabakhsh, Raghavendra Ramachandra, and
Christoph Busch. 2019. Subjective evaluation of
media consumer vulnerability to fake audiovisual
content. In 2019 Eleventh International Conference
on Quality of Multimedia Experience (QoMEX),
pages 1–6. IEEE.
Jan Kietzmann, Linda W Lee, Ian P McCarthy, and
Tim C Kietzmann. 2020a. Deepfakes: Trick or
treat? Business Horizons, 63(2):135–146.
Jan Kietzmann, Adam J Mills, and Kirk Plangger.
2020b. Deepfakes: perspectives on the future “real-
ity” of advertising and branding. International Jour-
nal of Advertising, pages 1–13.
Nils Köbis, Barbora Doležalová, and Ivan Soraperra. 2021. Fooled twice – people cannot detect deepfakes but think they can. Available at SSRN 3832978.
Andrei OJ Kwok and Sharon GM Koh. 2021. Deep-
fake: A social construction of technology perspec-
tive. Current Issues in Tourism, 24(13):1798–1802.
Kevin LaGrandeur. 2021. How safe is our reliance
on ai, and should we regulate it? AI and Ethics,
1(2):93–99.
Jack Langa. 2021. Deepfakes, real consequences:
Crafting legislation to combat threats posed by deep-
fakes. BUL Rev., 101:761.
Johannes Langguth, Konstantin Pogorelov, Stefan Brenner, Petra Filkuková, and Daniel Thilo Schroeder. 2021. Don't trust your eyes: Image manipulation in the age of deepfakes. Frontiers in Communication, 6:26.
YoungAh Lee, Kuo-Ting Huang, Robin Blom, Rebecca
Schriner, and Carl A Ciccarelli. 2021. To believe or
not to believe: framing analysis of content and audi-
ence response of top 10 deepfake videos on youtube.
Cyberpsychology, Behavior, and Social Networking,
24(3):153–158.
Sophie Maddocks. 2020. ‘a deepfake porn plot in-
tended to silence me’: exploring continuities be-
tween pornographic and ‘political’deep fakes. Porn
Studies, 7(4):415–423.
Artem A Maksutov, Viacheslav O Morozov, Alek-
sander A Lavrenov, and Alexander S Smirnov. 2020.
Methods of deepfake detection based on machine
learning. In 2020 IEEE Conference of Russian
Young Researchers in Electrical and Electronic En-
gineering (EIConRus), pages 408–411. IEEE.
Karolina Mania. 2020. The legal implications and
remedies concerning revenge porn and fake porn:
A common law perspective. Sexuality & Culture,
24(6):2079–2097.
David Moher, Larissa Shamseer, Mike Clarke, Davina
Ghersi, Alessandro Liberati, Mark Petticrew, Paul
Shekelle, and Lesley A Stewart. 2015. Preferred
reporting items for systematic review and meta-
analysis protocols (prisma-p) 2015 statement. Sys-
tematic reviews, 4(1):1–9.
Gillian Murphy and Emma Flynn. 2021. Deepfake
false memories. Memory, pages 1–13.
Nicholas O’Donnell. 2021. Have we no decency? sec-
tion 230 and the liability of social media companies
for deepfake videos. U. Ill. L. Rev., page 701.
Axel Oehmichen, Kevin Hua, Julio Amador Díaz López, Miguel Molina-Solana, Juan Gomez-Romero, and Yi-ke Guo. 2019. Not all lies are equal. A study into the engineering of political misinformation in the 2016 US presidential election. IEEE Access, 7:126305–126314.
Carl Öhman. 2019. Introducing the pervert's dilemma: a contribution to the critique of deepfake pornography. Ethics and Information Technology, pages 1–8.
Konstantin A Pantserev. 2020. The malicious use of ai-
based deepfake technology as the new threat to psy-
chological security and political stability. In Cyber
defence in the age of AI, smart societies and aug-
mented humanity, pages 37–55. Springer, Cham.
Miroslava Pavlíková, Barbora Šenkýřová, and Jakub Drmola. 2021. Propaganda and disinformation go online. Challenging Online Propaganda and Disinformation in the 21st Century, pages 43–74.
Adnan Qayyum, Junaid Qadir, Muhammad Umar Jan-
jua, and Falak Sher. 2019. Using blockchain to rein
in the new post-truth world and check the spread of
fake news. IT Professional, 21(4):16–24.
Md Shohel Rana and Andrew H Sung. 2020. Deep-
fakestack: A deep ensemble-based learning tech-
nique for deepfake detection. In 2020 7th IEEE
International Conference on Cyber Security and
Cloud Computing (CSCloud)/2020 6th IEEE Inter-
national Conference on Edge Computing and Scal-
able Cloud (EdgeCom), pages 70–75. IEEE.
Claudia Ratner. 2021. When “sweetie” is not so sweet:
Artificial intelligence and its implications for child
pornography. Family Court Review, 59(2):386–401.
Colonel Prof Filofteia Repez and Maria-Magdalena
Popescu. 2020. Social media and the threats against
human security deepfake and fake news. Romanian
Military Thinking, (4).
Jean-Marc Rickli and Marcello Ienca. 2021. The se-
curity and military implications of neurotechnology
and artificial intelligence. Clinical Neurotechnology
Meets Artificial Intelligence: Philosophical, Ethical,
Legal and Social Implications, page 197.
Helen Rosner. 2021. The ethics of a deepfake Anthony Bourdain voice. The New Yorker.
Saniat Javid Sohrawardi, Akash Chintha, Bao Thai,
Sovantharith Seng, Andrea Hickerson, Raymond
Ptucha, and Matthew Wright. 2019. Poster: To-
wards robust open-world detection of deepfakes. In
Proceedings of the 2019 ACM SIGSAC Conference
on Computer and Communications Security, pages
2613–2615.
Eliza Strickland. 2018. Ai-human partnerships tackle”
fake news”: Machine learning can get you only so
far-then human judgment is required-[news]. IEEE
Spectrum, 55(9):12–13.
Eliza Strickland. 2019. Facebook takes on deepfakes.
IEEE Spectrum, 57(1):40–57.
Catherine Stupp. 2019. Fraudsters used ai to mimic
ceo’s voice in unusual cybercrime case. The Wall
Street Journal, 30(08).
Bryan C Taylor. 2021. Defending the state from digital
deceit: the reflexive securitization of deepfake. Crit-
ical Studies in Media Communication, 38(1):1–17.
Japan Times. 2020. Two men arrested over deepfake
pornography videos.
Karin Wahl-Jorgensen and Matt Carlson. 2021. Con-
jecturing fearful futures: Journalistic discourses on
deepfakes. Journalism Practice, pages 1–18.
Mika Westerlund. 2019. The emergence of deepfake
technology: A review. Technology Innovation Man-
agement Review, 9(11).
Jeffrey Westling. 2019. Are deep fakes a shallow con-
cern? a critical analysis of the likely societal reac-
tion to deep fakes. A Critical Analysis of the Likely
Societal Reaction to Deep Fakes (July 24, 2019).
Digvijay Yadav and Sakina Salmani. 2019. Deepfake:
A survey on facial forgery technique using genera-
tive adversarial network. In 2019 International Con-
ference on Intelligent Computing and Control Sys-
tems (ICCS), pages 852–857. IEEE.
Waheeb Yaqub, Otari Kakhidze, Morgan L Brockman,
Nasir Memon, and Sameer Patil. 2020. Effects of
credibility indicators on social media news sharing
intent. In Proceedings of the 2020 chi conference on
human factors in computing systems, pages 1–14.
G Pascal Zachary. 2020. Digital manipulation and the
future of electoral democracy in the us. IEEE Trans-
actions on Technology and Society, 1(2):104–112.
Zsolt Ziegler. 2021. Michael Polányi's fiduciary program against fake news and deepfake in the digital age. AI & SOCIETY, pages 1–9.
... A systematic review using network analysis carried out by Gamage and colleagues [20] on the societal implications of deepfake technology highlighted the diversity of possible harms and threats to society in the literature. They found that security-related harms were dominant of these, with psychological, legal, and political harms also emerging. ...
Article
Full-text available
Deepfakes are a form of synthetic media that uses deep-learning technology to create fake images, video, and audio. The emergence of this technology has inspired much commentary and speculation from academics across a range of disciplines, who have contributed expert opinions regarding the implications of deepfake proliferation on fields such as law, politics, and entertainment. A systematic scoping review was carried out to identify, assemble, and critically analyze those academic narratives. The aim is to build on and critique previous attempts at defining the technology and categorizing the harms and benefits of deepfake technology. A range of databases were searched for relevant articles from 2017 to 2023, resulting in a large multi-disciplinary dataset of 102 papers, 181,659 words long, which were analyzed qualitatively through thematic analysis. Implications for future research include questioning the lack of research evidence for the supposed positives of deepfakes, recognizing the role that identity plays in deepfake technology, challenging the perceived accessibility/ believability of deepfakes, and proposing a more nuanced approach to the dichotomous “positive and negatives” of deepfakes. Furthermore, we show how definitional issues around what a deepfake is versus other forms of fake media feeds confusion around the novelty and impacts of deepfakes.
... As such, the technology may be used to create highly convincing fake images without consent, including pornographic images (Kirchengast, 2020). Recent studies have delved into the psychological and societal impacts of deepfakes (Gamage et al., 2021), investigating how exposure to manipulated media affects individuals' trust in visual information and the broader implications for public opinion and decision-making. Deepfake ads possess the ability to modify the significance and interpretation of individual rights, thereby influencing customer attitudes and responses (Buo, 2020). ...
Chapter
Artificial intelligence's rapid advancements are transforming marketing and consumer communication, like shifting from traditional influencers to AI-powered virtual influencers. Despite concerns about cyber-harassment and fraud, optimism prevails as netizens adopt technology. However, this technological advancement has opened numerous opportunities for marketers and practitioners to utilize it optimistically and within legal boundaries. So, it is crucial to synthesize the insights on how deepfake technologies can revolutionize marketing, particularly in enhancing consumer engagement, brand communication, and interaction strategies. Therefore, this chapter provides a comprehensive overview of the potential use of deepfake technologies in consumer marketing, emphasizing the ethical considerations that must be addressed, including transparency, consumer perceptions, and legal frameworks. The Nobel insights in this chapter will facilitate consumers, marketers, practitioners, researchers, and stakeholders, guiding them towards future directions.
... Concerns of dual use of Artificial Intelligence (AI) have been discussed by prior work (e.g., Shankar and Zare, 2022;Kania, 2018;Schmid et al., 2022;Urbina et al., 2022;Ratner, 2021;Gamage et al., 2021). However, NLP technologies are rarely included in such considerations. ...
Article
Full-text available
This study examined the impact of deepfakes on consumer protection behaviour and psychosocial responses, focusing on threats and coping appraisals in deepfake marketing. The study applied a two-theory framework combining the Theory of Planned Behavior and the Protection Motivation Theory. Data from 317 adult consumers were collected using a structured questionnaire. Scales were adapted from prior research, and analysis was conducted using Partial Least Squares Structural Equation Modeling (Smart-PLS 4.0 software). The results revealed that threats, attitudes, and subjective norms significantly influenced protective behaviour, while perceived behavioural control did not. Perceived severity and susceptibility significantly affected attitude and motivation to comply impacted consumers’ subjective norms. Perceived Response Efficacy, Self-Efficacy, and Perceived Response Cost were not supported as drivers of perceived behavioural control. This research on consumers’ threat and coping appraisals of deepfake technology offers key insights. It advances consumer behaviour theories in information systems, aids stakeholders like companies and marketers, and supports policymakers in developing regulations and safeguards against deepfake threats.
Preprint
Full-text available
This is the interim publication of the first International Scientific Report on the Safety of Advanced AI. The report synthesises the scientific understanding of general-purpose AI -- AI that can perform a wide variety of tasks -- with a focus on understanding and managing its risks. A diverse group of 75 AI experts contributed to this report, including an international Expert Advisory Panel nominated by 30 countries, the EU, and the UN. Led by the Chair, these independent experts collectively had full discretion over the report's content.
Technical Report
Full-text available
The International Scientific Report on the Safety of Advanced AI interim report sets out an up-to-date, science-based understanding of the safety of advanced AI systems. The independent, international, and inclusive report is a landmark moment of international collaboration. It marks the first time the international community has come together to supports efforts to build a shared scientific and evidence-based understanding of frontier AI risks. The intention to create such a report was announced at the AI Safety Summit in November 2023 This interim report is published ahead of the AI Seoul Summit to be held next week. The final report will be published in advance of the AI Action Summit to be held in France. The interim report restricts its focus to a summary of the evidence on general-purpose AI, which have advanced rapidly in recent years. The report synthesises the evidence base on the capabilities of, and risks from, general-purpose AI and evaluates technical methods for assessing and mitigating them.
Research
Full-text available
Deepfakes are a form of digital manipulation of audio or visual data. It is a form of cybercrime and often involves replicating an entities identity using various technological tools to disseminate false information. Creating deepfakes has never been more easier and prevalent as with the increase in digital penetration and easy access to Artificial Intelligence (AI), Photoshop, and Machine Learning (ML) software which are extensively employed to create convincing and authentic replica of videos and audio clips. Tweaking and manipulating existing images, videos and audio of individuals from social media and other platforms, cybercriminals produce content that is challenging to distinguish from reality in order to malign a person’s character and spread false information. There are several reports which talk about the perils and dangers of deepfake technology in the modern era where technology has become really accessible and people often fall prey to the ill ambitions and maligned objectives of cyber criminals. The vastness, variety and veracity of crimes involving deepfakes make it really difficult to be regulated. But obviously this doesn’t mean to call for an outright ban on the use of the technology as such would not be feasible; and the boons which the technology has to offer is something that can’t be disregarded. In one such instance showcasing the marvel of this technology would be to talk about Malaria Must Die campaign where David Beckham (English Footballer) delivered an awareness program in 9 different languages. In many such arenas, deepfake technology is already being utilized , for e.g. Government schemes, interviews, and campaigns being launched in vernacular languages, etc. It becomes imperative for the legal framework to evolve to contain and regulate the ever growing technological advancements. This paper would be talking about the rise of Deepfake technology and the mandate to regulate it. The paper would touch upon the problems that law enforcement agencies and the judiciary would face to regulate such content and also provides practical, evidence based and forensic solutions to detect Deepfakes. Laws at present in India and how they can provide remedy in cases involving deepfakes has also been discussed. Lastly the paper talks about platform responsibility and a greater cooperation between law enforcement and IT firms to regulate Deepfake technology
Article
We are in the midst of a technological revolution that will fundamentally alter the way we live, work, and relate to one another. Artificial Intelligence (AI) promises to transform many aspects of our society and economy. There is broad scientific consensus that the capabilities of AI systems have progressed rapidly on many tasks in the last five years. Large Language Models (LLMs) are a particularly salient example. In 2019, GPT-2, then the most advanced LLM, could not reliably produce a coherent paragraph of text and could not always count to ten. At the time of writing, the most powerful LLMs like Claude 3, GPT-4, and Gemini Ultra can engage consistently in multi-turn conversations, write short computer programs, translate between multiple languages, score highly on university entrance exams, and summarise long documents. This step-change in capabilities, and the potential for continued progress, could help advance the public interest in many ways. Among the most promising prospects are AI’s potential for education, medical applications, research advances in a wide range of fields, and increased innovation leading to increased prosperity. This rapid progress has also increased awareness of the current harms and potential future risks associated with the most capable types of AI.
Article
We review the phenomenon of deepfakes, a novel technology enabling inexpensive manipulation of video material through the use of artificial intelligence, in the context of today's wider discussion on fake news. We discuss the foundations and recent developments of the technology, its differences from earlier manipulation techniques, and technical countermeasures. While the threat of deepfake videos with substantial political impact has been widely discussed in recent years, the political impact of the technology has so far been limited. We investigate the reasons for this and extrapolate the types of deepfake videos we are likely to see in the future.
Article
This paper argues that Michael Polányi's account of how science, as an institution, establishes knowledge can provide a structure for a future institution capable of countering misinformation, or fake news, and deepfakes. I argue that only an institutional approach can adequately take up the challenge against the corresponding institution of fake news. Filtering news and information can be troubling, since it carries the threat of censorship and the limitation of free speech. Instead, I propose indicating reliable information with a trademark and news signing: approved information backed by brand equity. I offer a method of creating a standard for online news that people can rely on, similar to the quality marks on high-quality shopping products.
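In engineering terms, "news signing" amounts to a content-authentication protocol: a trusted institution cryptographically signs an article, and any reader can verify that the text is unaltered and really came from that institution. The sketch below is a minimal illustration of that idea, not the author's implementation; it assumes the third-party Python `cryptography` package and an Ed25519 key pair, and all function names are hypothetical.

```python
# Minimal sketch of "news signing": a trusted institution signs an article
# so readers can verify its origin and integrity. Hypothetical names;
# assumes the third-party `cryptography` package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_article(private_key: Ed25519PrivateKey, article: str) -> bytes:
    """The institution signs the article text with its private key."""
    return private_key.sign(article.encode("utf-8"))

def verify_article(public_key, article: str, signature: bytes) -> bool:
    """A reader verifies the signature against the institution's public key."""
    try:
        public_key.verify(signature, article.encode("utf-8"))
        return True
    except InvalidSignature:
        return False

institution_key = Ed25519PrivateKey.generate()  # held only by the institution
article = "Signed report: ..."
sig = sign_article(institution_key, article)

assert verify_article(institution_key.public_key(), article, sig)            # intact
assert not verify_article(institution_key.public_key(), article + "!", sig)  # tampered
```

The verifying key would be published openly, so the signature plays the same role as a quality mark: it does not judge truth directly, but ties the content to an accountable institution.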
Article
The proliferation of unconventional weapons triggered by technological progress has raised the threat level for individuals and society through the uncertainty generated by manipulated words and images, through AI algorithms, and through social engineering techniques. The rapid growth of social media has allowed information to be disseminated in a reckless manner. Deepfakes, centred on economic or political attacks, and fake news, acting on democracy and social systems, are the products that state and non-state actors use to weaken security in general and human security in particular. Deepfakes and fake news, as main instruments of hybrid warfare, have become important topics for security culture. In terms of human security, social media generates a series of advantages; however, the hostile use of these networks generates a series of threats to societies. Starting from these aspects, the present paper provides, along with conceptual definitions, a general understanding of the implications that social media challenges have for human security.
Keywords: social networks; deepfake; fake news; security culture; human security
Article
Using artificial intelligence, it is becoming increasingly easy to create highly realistic but fake video content: so-called deepfakes. As a result, it is no longer always possible to distinguish real from machine-created recordings with the naked eye. Despite the novelty of this phenomenon, regulators and industry players have started to address the risks associated with deepfakes. Yet research on deepfakes is still in its infancy. This paper presents findings from a systematic review of English-language deepfake research to identify salient discussions. We find that, to date, deepfake research is driven by computer science and law, with studies focusing on deepfake detection and regulation. While a number of studies address the potential of deepfakes for political disinformation, few have examined user perceptions of and reactions to deepfakes. Other notable research topics include challenges to journalistic practices and pornographic applications of deepfakes. We identify research gaps and derive implications for future communication studies research.
Article
Machine learning has enabled the creation of "deepfake videos": highly realistic footage that features a person saying or doing something they never did. In recent years, this technology has become more widespread, and various apps now allow an average social media user to create a deepfake video that can be shared online. There are concerns about how this may distort memory for public events, but to date there is no evidence to support this. Across two experiments, we presented participants (N = 682) with fake news stories in the format of text, text with a photograph, or text with a deepfake video. Though participants rated the deepfake videos as convincing, dangerous, and unethical, and some participants did report false memories after viewing deepfakes, the deepfake video format did not consistently increase false memory rates relative to the text-only or text-with-photograph conditions. Further research is needed, but the current findings suggest that while deepfake videos can distort memory for public events, they may not always be more effective than simple misleading text.
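Because the outcome in such studies is binary per participant (a false memory is reported or not), differences between the three format conditions are often tested with a chi-square test of independence. The sketch below shows that general analysis on invented counts; it illustrates the method and is not the paper's reported analysis.

```python
# Sketch: comparing false-memory rates across three presentation formats
# with a chi-square test of independence. All counts are invented.
from scipy.stats import chi2_contingency

#         [false memory, no false memory]
counts = [
    [35, 190],  # text only
    [40, 185],  # text + photograph
    [48, 179],  # text + deepfake video
]

chi2, p, dof, _expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")  # dof = (3-1)*(2-1) = 2
```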
Article
The production of child pornography using Artificial Intelligence is poised to evade current laws against child abuse. Artificial Intelligence "DeepFakes" can be used to create indistinguishable videos and images of child abuse without actual child abuse ever occurring. This Note proposes two solutions for curbing this inevitable dilemma. First, Artificial Intelligence should fall under the "computer-generated" terminology found in the 18 U.S.C. § 2256(8) definition of child pornography. Second, if Artificial Intelligence cannot be considered to fall under that definition, then 18 U.S.C. § 2256(8) should be amended to include "Artificial Intelligence-generation."
Article
Hyper-realistic manipulation of audio-visual content, i.e., deepfakes, presents a new challenge for establishing the veracity of online content. Research on the human impact of deepfakes, addressing both behaviors in response to and cognitive processing of deepfakes, remains sparse. In a pre-registered behavioral experiment (N = 210), we show that (a) people cannot reliably detect deepfakes, and (b) neither raising awareness nor introducing financial incentives improves their detection accuracy. Zeroing in on the underlying cognitive processes, we find that (c) people are biased towards mistaking deepfakes for authentic videos (rather than vice versa) and (d) overestimate their own detection abilities. Together, these results suggest that people adopt a "seeing-is-believing" heuristic for deepfake detection while being overconfident in their (low) detection abilities. The combination renders people particularly susceptible to being influenced by inauthentic deepfake content.
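Findings (a) and (c), low accuracy combined with a bias toward judging videos authentic, are exactly the pattern that signal detection theory separates into sensitivity (d') and response bias (criterion c). The sketch below computes both from hypothetical counts; it illustrates this standard method rather than reproducing the study's own analysis.

```python
# Sketch: signal detection measures for deepfake judgments.
# "Hit" = correctly flagging a deepfake; "false alarm" = calling an
# authentic video fake. Counts are hypothetical, not the study's data.
from scipy.stats import norm

def sdt_measures(hits: int, misses: int, false_alarms: int, correct_rejections: int):
    """Return (d_prime, criterion). criterion > 0 means a bias toward
    answering "authentic", i.e. the seeing-is-believing pattern."""
    # Log-linear correction keeps the z-scores finite at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

d, c = sdt_measures(hits=40, misses=60, false_alarms=15, correct_rejections=85)
print(f"d' = {d:.2f}, c = {c:.2f}")  # low d' and c > 0: poor, biased detection
```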
Article
On the internet, surfers jettison much of their social restraint, confronting and correcting perfect strangers. This leads, for example, to edit wars on Wikipedia, condescending insults on Reddit, and righteous putdowns on Twitter. This behavior invites plenty of legal analysis, angry editorializing, and technological proposals, but rarely economic analysis. The author addresses that gap and considers the question: "What economic factors make confrontational conversation more or less likely in our era?" The increasing frequency of breakaway communities is a symptom that they are becoming cheaper to build; ergo, we should expect mainstream sites to face increasing pressure towards fragmentation. He concludes that the trend toward fragmentation should worry anyone who wants to maintain civil society. Who will encourage the confrontations that settle public conversations? Most worrisome, misinformation and deepfakes are becoming more widespread in breakaway communities, and especially on the dark web. Right now, most users of deepfakes entertain themselves (you do not really want to know the details), but, as with any frontier software, the technology will become mainstream soon enough. As deepfakes become more common, who will adjudicate whether a deepfake of a politician or celebrity is real or not? How can anybody do that if online users have sorted themselves into groups that do not trust one another?
Article
Deepfakes may refer to algorithmically synthesized material wherein the face of a person is superimposed onto another body. To date, most deepfakes found online are pornographic, with the people depicted in them rarely consenting to their creation and publicization. Deepfakes leave anyone with an online presence vulnerable to victimization. As a testament to policy often being reactive to antisocial behavior, current Canadian legislation offers no clear recourse to those who are victimized by deepfake pornography. We aim to provide a critical review of the legal mechanisms and remedies in place, including criminal charges, defamation, copyright infringement laws, and injunctive relief, that could be applied in deepfake pornography cases. To combat deepfake pornography, we suggest that current laws be expanded to include language specific to falsely created pornography made without the explicit consent of all depicted persons. We also discuss the extent to which host websites are responsible for vetting the content uploaded to their platforms. Finally, we present a call for action on a societal and research level to deal with deepfakes and better support victims of deepfake pornography.
Article
This research examines (a) the negative impact of disinformation that includes a deepfake video and (b) the protective effect of media literacy education. We conducted an experiment using a 2 (disinformation message type: deepfake video present vs. absent) × 3 (media literacy education: general disinformation vs. deepfake-specific vs. no literacy) factorial design. In the general disinformation (vs. deepfake-specific) literacy condition, participants were informed about (a) the definition of disinformation (vs. deepfake), (b) some examples of disinformation (vs. deepfake), and (c) the social consequences of disinformation (vs. deepfake). Results showed that disinformation messages including a deepfake video produced greater vividness, persuasiveness, credibility, and intent to share the message. Media literacy education reduced the effects of disinformation messages.
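A 2 × 3 between-subjects design like this is conventionally analyzed with a two-way ANOVA testing both main effects and their interaction. The sketch below simulates cell data and fits such a model with statsmodels; the variable names, effect sizes, and data are invented for illustration and do not come from the study.

```python
# Sketch: two-way ANOVA for a 2 (deepfake present/absent) x 3 (literacy
# condition) between-subjects design. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
rows = []
for deepfake in ["present", "absent"]:
    for literacy in ["general", "deepfake_specific", "none"]:
        # Hypothetical "intent to share" means on a 1-7 scale.
        mean = 4.5 if deepfake == "present" else 3.5
        if literacy != "none":
            mean -= 0.5  # literacy education dampens the effect
        for score in rng.normal(mean, 1.0, size=30):
            rows.append({"deepfake": deepfake, "literacy": literacy, "share": score})

df = pd.DataFrame(rows)
model = ols("share ~ C(deepfake) * C(literacy)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects + interaction
```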