Understanding the role of fear of missing out and deficient self-regulation in sharing of deepfakes on social media: Evidence from eight countries

Saifuddin Ahmed 1*, Sheryl Wei Ting Ng 2 and Adeline Wei Ting Bee 1

1 Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore; 2 Department of Communications and New Media, National University of Singapore, Singapore, Singapore

*Correspondence: Saifuddin Ahmed, sahmed@ntu.edu.sg

Front. Psychol. 14:1127507. doi: 10.3389/fpsyg.2023.1127507
Deepfakes are a troubling form of disinformation that has been drawing increasing attention. Yet, there remains a lack of psychological explanations for deepfake sharing behavior and an absence of research knowledge in non-Western contexts where public knowledge of deepfakes is limited. We conduct a cross-national survey study in eight countries to examine the role of fear of missing out (FOMO), deficient self-regulation (DSR), and cognitive ability in deepfake sharing behavior. Results are drawn from a comparative survey in seven Asian contexts (China, Indonesia, Malaysia, Philippines, Singapore, Thailand, and Vietnam) and are compared with the United States, where discussions about deepfakes have been most prominent. Overall, the results suggest that those who perceive deepfakes to be accurate are more likely to share them on social media. Furthermore, in all countries, sharing is also driven by the social-psychological trait FOMO. DSR of social media use was also found to be a critical factor in explaining deepfake sharing. It is also observed that individuals with low cognitive ability are more likely to share deepfakes. However, we also find that the effects of DSR of social media use and FOMO are not contingent upon users' cognitive ability. The results of this study contribute to strategies to limit deepfake propagation on social media.
KEYWORDS
deepfakes, disinformation, FOMO, self-regulation, cognitive ability, sharing, self-control, Asia
1. Introduction
Experts have recently warned against the dangers of deepfakes, a form of disinformation created by artificial intelligence. Specifically, deepfakes are highly realistic but synthetically generated video or audio representations of individuals created using artificial intelligence (Westerlund, 2019). They are often more striking, persuasive, and deceptive compared to text-based disinformation (Hameleers et al., 2020). Therefore, the potential dangers of deepfakes have drawn significant academic attention, with several scholars studying public engagement with deepfakes and their consequences (Brooks, 2021; Ahmed, 2021a).

However, there is little evidence on why users share deepfakes on social media. Some studies suggest that users with high political interest and those with low cognitive ability are more likely
to share deepfakes (Ahmed, 2021b). Still, there is a lack of psychological explanations for this behavior. For instance, while some studies have explored the role of cognitive ability in reducing the spread of general misinformation (Apuke et al., 2022), there is a lack of research on its effect on deepfake sharing behavior.
Moreover, most current research on deepfakes is based in Western democratic contexts where general awareness of deepfakes may be higher because they have been featured more heavily in the public discourse, as in the United States. However, it is unclear how these findings would apply in non-Western contexts where public knowledge of deepfakes is limited.
To gain a more complete understanding of why users share deepfakes on social media, it is necessary to examine the role of psychological traits and cognitive ability in deepfake sharing behavior across multiple contexts. More precisely, we focus on a set of psychological (e.g., fear of missing out) and cognitive factors (e.g., cognitive ability) that can help explain deepfake sharing on social media. Further, this study presents the results of a cross-national comparative survey on deepfake sharing behavior in seven Asian contexts (China, Indonesia, Malaysia, Philippines, Singapore, Thailand, and Vietnam) and compares these findings to the United States, where discussions about deepfakes have been most prominent. The cross-national design of the study increases the generalizability of the findings. The specific goals of the study are outlined below.
In the rst step, weexamine the role of fear of missing out
(FOMO) in deepfake sharing behavior. FOMO is the psychological
anxiety that one might bele out of exciting exchanges in their
social circles (Przybylski etal., 2013). Some scholars found that
FOMO is positively associated with sharing fake news online (Talw ar
etal., 2019), while others have found it insignicant in predicting
fake news sharing (Balakrishnan et al., 2021). ese conicting
results highlight the need to clarify the inuence of FOMO on
misinformation sharing. Nevertheless, regarding deepfakes
specically, FOMO has been found to positively predict intentional
deepfake sharing (Ahmed, 2022). However, Ahmed (2022) focused
on intentional sharing behavior in two technologically advanced
contexts: UnitedStates and Singapore. It is unclear whether such
ndings would bereplicated in the less technologically advanced
countries westudy in this paper. However, if participants in these
countries emulate the same processes as social media users in the
UnitedStates and Singapore, wewould expect levels of FOMO to
be associated with deepfake sharing behavior. Given the role of
FOMO in existing literature, we hypothesize that FOMO will
bepositively associated with the sharing of deepfakes (H1).
Next, this study evaluates the effect of self-regulation on deepfake sharing behavior. Self-regulation is "the process of self-control through the subfunctions of self-monitoring, judgmental process, and self-reaction" (LaRose et al., 2003, p. 232). With sufficient self-regulation, individuals can modulate their behaviors through self-observation. Conversely, deficient self-regulation (DSR), when conscious self-control is weakened (LaRose et al., 2003), manifests in behavioral addictions, such as an addiction to the internet, through a lack of control over impulses (Vally, 2021). Prior studies have reported that DSR significantly predicted unverified information sharing (Islam et al., 2020). DSR is also associated with social media fatigue (Islam et al., 2020; Vally, 2021). In turn, social media fatigue is positively associated with sharing fake news online (Talwar et al., 2019). Hence, we hypothesize that DSR will be positively associated with the sharing of deepfakes (H2).
It is also essential to investigate the role of cognitive ability in the sharing of deepfakes, as it can provide insight into how individuals make decisions about sharing potentially harmful content. Cognitive ability, which refers to an individual's mental capacity for problem-solving, decision-making, and learning, can influence how individuals process information and make decisions. An aspect of cognitive ability that is vital to engagement with deepfakes is "the ability or the motivation to think analytically" (Ahmed, 2021b, p. 3). Prior research on the association of cognitive ability with deepfake perception has reported that individuals with higher cognitive ability are less likely to engage in deepfake sharing because they have better discernment and decision-making abilities (Ahmed, 2021b). This may be because individuals with high cognitive ability are known to make sound judgments and are inclined toward problem-solving (Apuke et al., 2022). As such, these individuals might perform better at tasks that require reasoning and assessment, such as discerning falsehoods from the truth (Nurse et al., 2021). Although high cognitive ability does not make an individual immune to misinformation (Apuke et al., 2022), cognitive ability appears to confer some advantages for navigating misinformation. Given the robustness of cognitive ability in safeguarding users in misinformation engagement, we hypothesize that cognitive ability will be negatively associated with the sharing of deepfakes (H3).
Finally, given that cognitive ability can influence deepfake sharing, we also explore whether the effects of FOMO and DSR are contingent upon individuals' cognitive ability. We anticipate that cognitive ability might act as a buffer against individual traits such as FOMO and DSR in deepfake sharing. Therefore, we pose a research question: how would cognitive ability moderate the association between (a) FOMO and (b) DSR and the sharing of deepfakes (RQ1)?
Investigating the moderating effect of cognitive ability on the relationship between FOMO, DSR, and deepfake sharing can provide insight into how individuals make decisions about sharing potentially harmful content. Such a study can also inform interventions and strategies to reduce the spread of harmful deepfake content. Therefore, we report a cross-national comparative study that uses online panel survey data from eight countries to test the relationships between FOMO, DSR, cognitive ability, and deepfake sharing. We also tested for the moderating role of cognitive ability in the relationships mentioned above. Overall, this study contributes to the growing literature on user engagement with disinformation and will help us understand the critical psychological underpinnings of deepfake sharing.
2. Materials and methods
2.1. Participants
We contracted Qualtrics LLC, a survey research firm, to conduct surveys in eight countries: the United States, China, Singapore, Indonesia, Malaysia, Philippines, Thailand, and Vietnam. We used a quota sampling approach to match the sample to population parameters, focusing on age and gender quotas. This was done to generalize our findings to the national adult population. The surveys were conducted concurrently in June 2022 and were translated into
national languages. Wegathered 1,008 participants (on average) per
country – UnitedStates (N= 1,010), China (N = 1,010), Singapore
(N= 1,008), Indonesia (N= 1,010), Malaysia (N= 1,002), Philippines
(N= 1,010), ailand (N= 1,010), and Vietnam (N= 1,010). e study
was approved by the institutional review board at Nanyang
Technological University.
2.2. Measurements
Deepfake sharing was measured using four deepfake video stimuli (see Supplementary Table A1). The deepfakes were chosen based on their virality on social media. They included a mix of political (e.g., Mark Zuckerberg and Vladimir Putin) and entertainment deepfakes (e.g., Tom Cruise and Kim Kardashian). This approach enhances the external validity of our design. Moreover, other studies have also used some of these deepfakes (see Cochran and Napshin, 2021; Ahmed, 2021a).

For each deepfake video, we ensured that participants were able to play it in full-screen mode. We also asked participants whether they could play the video file and had successfully watched the video. We then asked how likely they were to share that video with others on social media. Participants responded on a 5-point scale ranging from 1 = extremely to 5 = not at all. We then reverse-coded and averaged their responses across the four stimuli, so that a higher score represents greater sharing.
Perceived accuracy was measured using the same four deepfake stimuli. Since perceived accuracy has been found to closely relate to sharing disinformation (Ahmed, 2021a; t'Serstevens et al., 2022), we have included it as a critical covariate in our analyses. For each deepfake, we asked participants how accurate the central claim presented in the video was (e.g., for the Mark Zuckerberg deepfake we asked, "how accurate is the claim that Mark Zuckerberg said whoever controls the data, controls the future"). Participants responded on a 5-point scale ranging from 1 = not at all accurate to 5 = extremely accurate. The responses for the perceived accuracy of the four deepfakes were averaged to create a scale of perceived accuracy.
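As an illustration, index construction of this kind takes only a few lines of pandas. The column names and values below are hypothetical, not from the authors' dataset; the sketch simply mirrors the reverse-coding and averaging described above.

```python
import pandas as pd

# Hypothetical wide-format responses: one row per participant, with four
# sharing items (1 = extremely ... 5 = not at all) and four accuracy items
# (1 = not at all accurate ... 5 = extremely accurate).
df = pd.DataFrame({
    "share_1": [1, 4], "share_2": [2, 5], "share_3": [1, 4], "share_4": [2, 5],
    "acc_1":   [3, 4], "acc_2":   [2, 5], "acc_3":   [3, 4], "acc_4":   [2, 5],
})

share_items = ["share_1", "share_2", "share_3", "share_4"]
acc_items = ["acc_1", "acc_2", "acc_3", "acc_4"]

# Reverse-code the sharing items (6 - x on a 5-point scale) before
# averaging, so a higher score represents greater sharing.
df["sharing"] = (6 - df[share_items]).mean(axis=1)

# The accuracy items already point in the intended direction.
df["perceived_accuracy"] = df[acc_items].mean(axis=1)

print(df[["sharing", "perceived_accuracy"]])
```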
Self-regulation (decient) was measured using a ve-item scale
adapted from LaRose and Eastin (2004). Items included “I sometimes
try to hide how much time Ispend on social media from my family or
friends,” and “I feel my social media use is out of control.” among
others. Participants responded on a 5-point scale ranging from
1 = strongly disagree to 5 = strongly agree. e items were averaged to
create an index of DSR.
Fear of missing out was measured using a previously validated 10-item scale (Przybylski et al., 2013). Example items include "I get worried when I find out my friends are having fun without me" and "It bothers me when I miss an opportunity to meet up with friends." Participants responded on a 5-point scale ranging from 1 = not at all true of me to 5 = extremely true of me. The items were averaged to create an index of FOMO.
Cognitive ability was measured using the Wordsum test. It includes 10 vocabulary words for which participants are required to find the closest synonym from five options. This test is well established and has been used widely in the literature, including deepfake and fake news studies (Thorndike, 1942; Wechsler, 1958; Ganzach et al., 2019; Ahmed, 2021b).
Descriptive statistics for the variables above can be found in Table 1. All variables met satisfactory reliability in each context (except cognitive ability in Indonesia, discussed below). See Supplementary Table B1 for details.
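The reliability coefficient behind such checks is typically Cronbach's alpha; that is an assumption on our part, since the paper defers the details to Supplementary Table B1. For a scale of k items it is defined as

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
\]

where \(\sigma^{2}_{Y_i}\) is the variance of item i and \(\sigma^{2}_{X}\) is the variance of the summed scale score; values of roughly .70 and above are conventionally treated as satisfactory.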
2.3. Covariates
This study includes several covariates that may influence deepfake sharing behavior: the demographic variables (a) age, (b) gender, (c) education, and (d) income, and the news consumption variables (e) social media news, (f) TV news, (g) radio news, and (h) print news (see Table 1).

TABLE 1 Mean and standard deviation of all variables under study.

| | United States | China | Singapore | Indonesia | Malaysia | Philippines | Thailand | Vietnam |
|---|---|---|---|---|---|---|---|---|
| Male | 46.00% | 51.50% | 54.90% | 50.20% | 49.20% | 51.30% | 52.90% | 53.20% |
| Age | 49.0 (17.5) | 38.0 (12.7) | 43.8 (14.0) | 40.3 (13.0) | 39.9 (13.7) | 37.2 (13.6) | 41.6 (13.0) | 38.4 (12.8) |
| Education | 5.22 (1.21) | 5.72 (0.998) | 5.32 (1.22) | 5.27 (1.06) | 5.77 (1.61) | 5.38 (1.11) | 5.34 (1.17) | 5.55 (0.943) |
| Income | 5.79 (3.76) | 6.00 (2.27) | 5.47 (2.64) | 2.81 (2.39) | 4.14 (2.51) | 3.26 (2.02) | 4.09 (1.91) | 6.52 (2.03) |
| SM news | 2.90 (1.49) | 3.27 (1.22) | 3.17 (1.23) | 3.49 (1.21) | 3.76 (1.21) | 4.17 (0.97) | 4.21 (1.03) | 3.67 (1.07) |
| TV news | 3.35 (1.35) | 3.26 (1.17) | 3.11 (1.22) | 3.13 (1.24) | 3.31 (1.23) | 3.94 (1.08) | 3.68 (1.22) | 3.28 (1.09) |
| Radio news | 2.40 (1.28) | 2.46 (1.16) | 2.44 (1.22) | 2.07 (1.09) | 2.64 (1.13) | 3.07 (1.22) | 2.43 (1.19) | 2.32 (1.11) |
| Print news | 2.20 (1.29) | 2.12 (1.03) | 2.51 (1.31) | 2.11 (1.15) | 2.63 (1.24) | 2.59 (1.23) | 2.37 (1.11) | 2.33 (1.03) |
| Perceived accuracy | 2.86 (1.04) | 3.24 (0.76) | 2.41 (0.92) | 2.94 (0.80) | 2.90 (0.89) | 2.81 (0.81) | 2.88 (0.87) | 2.62 (0.88) |
| DSR | 2.27 (1.16) | 2.62 (0.95) | 2.43 (1.04) | 2.73 (0.88) | 2.68 (0.90) | 2.61 (0.97) | 2.93 (0.90) | 2.67 (0.97) |
| FOMO | 2.29 (1.06) | 2.53 (0.84) | 2.13 (0.95) | 2.21 (0.82) | 2.22 (0.88) | 2.33 (0.88) | 2.51 (0.89) | 2.44 (0.94) |
| Cognitive ability | 5.43 (2.41) | 8.00 (2.09) | 5.53 (2.30) | 4.78 (1.19) | 4.47 (1.79) | 6.56 (2.14) | 5.81 (1.67) | 5.23 (1.86) |

Entries are mean (SD), except Male, which is the percentage of male respondents.
2.4. Analysis
We ran hierarchical regression analyses for the eight countries using SPSS and explored the moderation effects using Hayes' (2018) PROCESS macro for SPSS. We also conducted reliability analyses for the key variables and discovered that cognitive ability had low reliability in Indonesia. We thus excluded cognitive ability from our analysis for Indonesia.

In addition to running separate regression models for each country, we also ran a pooled regression model. The results are in line with what is presented in the study (see Supplementary Table C1 for details).
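For readers who want to mirror this three-step setup outside SPSS, a minimal sketch in Python with statsmodels is shown below. The file name and variable names are hypothetical, and standardizing the focal predictors before forming interactions is our own choice for interpretability, not a step the paper reports.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_us.csv")  # hypothetical per-country data file

# Standardize the focal variables so the step-3 interaction terms are
# easier to interpret and less collinear with their components.
for col in ["sharing", "perceived_accuracy", "fomo", "dsr", "cog_ability"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

controls = ("age + male + education + income + sm_news + tv_news"
            " + radio_news + print_news")

# Step 1: demographics and news use only.
step1 = smf.ols(f"sharing ~ {controls}", data=df).fit()

# Step 2: add the variables of interest.
step2 = smf.ols(
    f"sharing ~ {controls} + perceived_accuracy + dsr + fomo + cog_ability",
    data=df,
).fit()

# Step 3: add the moderation terms that answer RQ1.
step3 = smf.ols(
    f"sharing ~ {controls} + perceived_accuracy + dsr + fomo + cog_ability"
    " + dsr:cog_ability + fomo:cog_ability",
    data=df,
).fit()

for i, model in enumerate([step1, step2, step3], start=1):
    print(f"Step {i}: R^2 = {model.rsquared:.3f}")
print(step3.params[["dsr:cog_ability", "fomo:cog_ability"]])
```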
3. Results
The results of the regression analysis are presented in Table 2.

TABLE 2 Hierarchical regression analysis predicting deepfake sharing (standardized coefficients β).

| | United States | China | Singapore | Indonesia | Malaysia | Philippines | Thailand | Vietnam |
|---|---|---|---|---|---|---|---|---|
| Step 1: | | | | | | | | |
| Age | −0.32*** | −0.14*** | −0.26*** | −0.18*** | −0.21*** | −0.23*** | −0.29*** | −0.02 |
| Male | −0.08*** | −0.05 | −0.10*** | −0.10*** | −0.11*** | −0.13*** | −0.07* | −0.12*** |
| Education | −0.08** | 0.04 | −0.04 | 0.06* | −0.07* | −0.04 | −0.03 | −0.05 |
| Income | 0.05* | 0.11** | −0.01 | 0.07* | −0.02 | 0.02 | 0.05 | 0.00 |
| SM news | 0.19*** | 0.14*** | 0.13*** | 0.12*** | 0.04 | 0.16*** | −0.01 | 0.16*** |
| TV news | 0.02 | 0.07 | −0.02 | 0.09* | −0.04 | 0.06 | 0.05 | −0.08* |
| Radio news | 0.21*** | 0.13** | 0.19*** | 0.17*** | 0.18*** | 0.08 | 0.15*** | 0.17*** |
| Print news | 0.17*** | 0.09* | 0.11** | 0.15*** | 0.12** | 0.12** | 0.17*** | 0.15*** |
| R² | 0.42*** | 0.16*** | 0.16*** | 0.23*** | 0.12*** | 0.15*** | 0.19*** | 0.12*** |
| Step 2: Variables of interest | | | | | | | | |
| Perceived accuracy | 0.28*** | 0.32*** | 0.31*** | 0.28*** | 0.26*** | 0.18*** | 0.30*** | 0.30*** |
| Deficient self-regulation (DSR) | 0.07* | −0.01 | 0.16*** | 0.08** | 0.07* | 0.05 | 0.06* | −0.00 |
| FOMO | 0.25*** | 0.14*** | 0.27*** | 0.18*** | 0.33*** | 0.22*** | 0.28*** | 0.31*** |
| Cognitive ability (CA) | −0.10*** | −0.05 | −0.14*** | – | −0.11*** | −0.10** | −0.05* | −0.16*** |
| R² | 0.17*** | 0.12*** | 0.32*** | 0.13*** | 0.23*** | 0.11*** | 0.22*** | 0.26*** |
| Step 3: Moderation effects | | | | | | | | |
| DSR × CA | 0.10 | −0.07 | −0.33*** | – | 0.10 | −0.07 | 0.01 | −0.07 |
| FOMO × CA | −0.14 | 0.26 | 0.12 | – | 0.02 | 0.01 | 0.19 | 0.03 |
| R² | 0.002 | 0.003 | 0.008*** | – | 0.001 | 0.001 | 0.002 | 0.001 |
| Total R² | 0.59 | 0.28 | 0.49 | 0.36 | 0.35 | 0.26 | 0.42 | 0.38 |

***p < 0.001; **p < 0.01; *p < 0.05. Males = 0, females = 1. Cognitive ability was excluded for Indonesia due to low reliability.

In our preliminary analyses, we found that age was negatively associated with sharing in all countries except Vietnam: United States (β = −0.32, p < 0.001), China (β = −0.14, p < 0.001), Singapore (β = −0.26, p < 0.001), Indonesia (β = −0.18, p < 0.001), Malaysia (β = −0.21, p < 0.001), Philippines (β = −0.23, p < 0.001), and Thailand (β = −0.29, p < 0.001), but not Vietnam (β = 0.02, p = 0.44). This suggests that older adults tend to share less.
We also found a sex effect whereby males (males = 0, females = 1 in the regression models) were more likely to share in most contexts. The results were significant for the United States (β = −0.08, p < 0.001), Singapore (β = −0.10, p < 0.001), Indonesia (β = −0.10, p < 0.001), Malaysia (β = −0.11, p < 0.001), Philippines (β = −0.13, p < 0.001), Thailand (β = −0.07, p < 0.05), and Vietnam (β = −0.12, p < 0.001); the only exception was China (β = −0.05, p = 0.11).
Social media news consumption was also a significant predictor of sharing behavior in most countries. Individuals who consumed more social media news were more likely to share in the United States (β = 0.19, p < 0.001), China (β = 0.14, p < 0.001), Singapore (β = 0.13, p < 0.001), Indonesia (β = 0.12, p < 0.001), Philippines (β = 0.16, p < 0.001), and Vietnam (β = 0.16, p < 0.001). The results for Malaysia (β = 0.04, p = 0.18) and Thailand (β = −0.01, p = 0.76) were not statistically significant.
We also found that the perceived accuracy of the deepfakes was a strong and positive predictor in all countries: United States (β = 0.28, p < 0.001), China (β = 0.32, p < 0.001), Singapore (β = 0.31, p < 0.001), Indonesia (β = 0.28, p < 0.001), Malaysia (β = 0.26, p < 0.001), Philippines (β = 0.18, p < 0.001), Thailand (β = 0.30, p < 0.001), and Vietnam (β = 0.30, p < 0.001). In essence, those who thought the deepfakes were true were more likely to have sharing intentions. These findings are consistent with prior disinformation research (Ahmed, 2021a).
Next, wefound strong support for H1. FOMO was positively
associated with sharing. is was true for all countries – UnitedStates
(β = 0.25, p< 0.001), China (β = 0.14, p< 0.001), Singapore (β = 0.27,
p < 0.001), Indonesia (β = 0.18, p < 0.001), Malaysia (β = 0.33,
p < 0.001), Philippines (β = 0.22, p< 0.001), ailand (β = 0.28,
p< 0.001), and Vietnam (β = 0.31, p< 0.001).
Next, wefound in ve of eight countries, namely, the UnitedStates
(β = 0.07, p< 0.05), Singapore (β = 0.16, p< 0.001), Indonesia (β =
0.08, p< 0.01), Malaysia (β = 0.07, p< 0.05), and ailand (β = 0.06,
p< 0.05), DSR was positively associated with the sharing of deepfakes.
In other words, those who were more impulsive were more likely to
share deepfakes. us, H2 is supported.
We also found support for H3: those with higher cognitive ability were less likely to share deepfakes. This was true for the United States (β = −0.10, p < 0.001), Singapore (β = −0.14, p < 0.001), Malaysia (β = −0.11, p < 0.001), Philippines (β = −0.10, p < 0.001), Thailand (β = −0.05, p < 0.001), and Vietnam (β = −0.16, p < 0.001).
As for RQ1, we did not nd evidence that cognitive ability
moderated the association between (a) FOMO and (b) DSR and
deepfake sharing. e only exception was Singapore (β = −0.33,
p< 0.001); cognitive ability diminished the eects of DSR. In other
words, those who were most decient in self-regulation and had the
least cognitive ability were most likely to share deepfakes (Table2).
However, welargely witness that the impact of DSR and FOMO on
sharing behavior is not contingent upon the cognitive ability
of individuals.
4. Discussion
Most studies have investigated social media sharing of general mis- and disinformation. This study is a rare attempt at analyzing sharing associated with deepfakes in eight countries. Overall, the results suggest that those who perceive deepfakes to be accurate are more likely to share them on social media. Furthermore, in all countries, sharing is also driven by the social-psychological trait FOMO. DSR of social media use was also found to be a critical factor in explaining the sharing of deepfakes, though FOMO is a more consistent predictor than DSR. It is also observed that individuals with low cognitive ability are more likely to engage in deepfake sharing. However, we also find that the effects of DSR of social media use and FOMO are not contingent upon users' cognitive ability. In sum, the study identifies critical factors associated with the sharing of deepfakes on social media. The findings are discussed in detail below.
First, the study provides empirical support for the often-discussed relationship between perceived accuracy and the sharing of disinformation. In the wider literature, many have questioned the effectiveness of flagged corrections. This is because of the continued influence effect, in which people continue to act on their misinformed beliefs even after the misinformation has been debunked (Lewandowsky et al., 2012; Ecker and Antonio, 2021). Because the continued influence effect is resilient across situations and people, researchers have generally taken a modest stance towards correcting mis- or disinformation. In view of this debate, the results of this study suggest that targeting accuracy perceptions of deepfakes on social media may still help curtail their propagation. This is in line with a recent study by Ecker and Antonio (2021), which outlined certain conditions for fake news retraction efficacy. They argue that retractions from highly trustworthy and authoritative sources can mitigate the continued influence effect. Similarly, when people are given persuasive reasons to correct their beliefs, they may be less likely to persist in the misbelief and less likely to share the deepfake. However, given the difference in the nature of disinformation (deepfakes vs. other forms), this remains an area worth investigating.
Second, wend strong support for FOMO to beassociated with
the sharing of deepfakes. Individuals with high levels of FOMO are
found to besensitive and susceptible to distress due to neglect by
their social media peers (Beyens etal., 2016). It is possible that such
individuals may share deepfakes to gain an opportunity to receive
social acceptance and avoid peer neglect on social media. is is in
line with Talwar et al.’s (2019) explanation using the
TABLE1 Mean, standard deviation of all variables under study.
United
States
China Singapore Indonesia Malaysia Philippines Thailand Vietnam
Male 46.00% 51.50% 54.90% 50.20% 49.20% 51.30% 52.90% 53.20%
Mean (SD) Mean SD) Mean (SD) Mean (SD) Mean (SD) Mean (SD) Mean SD) Mean (SD)
Age 49.0 (17.5) 38.0 (12.7) 43.8 (14.0) 40.3 (13.0) 39.9 (13.7) 37.2 (13.6) 41.6 (13.0) 38.4 (12.8)
Education 5.22 (1.21) 5.72 (0.998) 5.32 (1.22) 5.27 (1.06) 5.77 (1.61) 5.38 (1.11) 5.34 (1.17) 5.55 (0.943)
Income 5.79 (3.76) 6.00 (2.27) 5.47 (2.64) 2.81 (2.39) 4.14 (2.51) 3.26 (2.02) 4.09 (1.91) 6.52 (2.03)
SM news 2.90 (1.49) 3.27 (1.22) 3.17 (1.23) 3.49 (1.21) 3.76 (1.21) 4.17 (0.97) 4.21 (1.03) 3.67 (1.07)
TV news 3.35 (1.35) 3.26 (1.17) 3.11 (1.22) 3.13 (1.24) 3.31 (1.23) 3.94 (1.08) 3.68 (1.22) 3.28 (1.09)
Radio news 2.40 (1.28) 2.46 (1.16) 2.44 (1.22) 2.07 (1.09) 2.64 (1.13) 3.07 (1.22) 2.43 (1.19) 2.32 (1.11)
Print news 2.20 (1.29) 2.12 (1.03) 2.51 (1.31) 2.11 (1.15) 2.63 (1.24) 2.59 (1.23) 2.37 (1.11) 2.33 (1.03)
Perceived accuracy 2.86 (1.04) 3.24 (0.76) 2.41 (0.92) 2.94 (0.8) 2.90 (0.89) 2.81 (0.81) 2.88 (0.87) 2.62 (0.88)
DSR 2.27 (1.16) 2.62 (0.95) 2.43 (1.04) 2.73 (0.88) 2.68 (0.9) 2.61 (0.97) 2.93 (0.90) 2.67 (0.97)
FOMO 2.29 (1.06) 2.53 (0.84) 2.13 (0.95) 2.21 (0.82) 2.22 (0.88) 2.33 (0.88) 2.51 (0.89) 2.44 (0.94)
Cognitive ability 5.43 (2.41) 8.00 (2.09) 5.53 (2.30) 4.78 (1.19) 4.47 (1.79) 6.56 (2.14) 5.81 (1.67) 5.23 (1.86)
Ahmed et al. 10.3389/fpsyg.2023.1127507
Frontiers in Psychology 05 frontiersin.org
self-determination theory. In general, the self-determination theory
(SDT) postulates that humans have an innate desire to grow in
understanding and make meaning of itself. SDT also posits that
people tend to rely on social support. In other words, humans nd it
dicult to understand themselves without being in relation with
others. erefore, when their sense of relatedness is compromised,
they may try to reestablish it by sharing exciting content within their
social circle. Moreover, deepfakes are oen intriguing, amusing, and
provoking and may add to the value of their sharing. Previous
evidence also conrms that negative and novel information oen
spreads more rapidly (Vosoughi et al., 2018) – a characteristic of
most deepfakes.
Third, DSR of social media use was positively associated with the sharing of deepfakes in a majority of contexts. The relationship between DSR and sharing can be explained by the fact that when individuals suffer from DSR, they often engage in behaviors that they would not perform if they were self-aware and able to employ a certain level of self-control (LaRose et al., 2003). Here, they may be more likely to share deepfakes. Further, individuals with high DSR may have difficulty managing their emotional reactions to deepfakes. As such, they may feel overwhelmed, and their emotional response could lead them to share on social media in an attempt to seek support or validation from their social network.
Fourth, individuals with low cognitive ability are vulnerable to deepfake sharing. These results are consistent with the existing literature and provide support for the generalizability of the relationships across countries. Individuals with low cognitive ability may have difficulty understanding complex information and applying critical skills in analyzing the authenticity of deepfakes. As such, they may be more susceptible to sharing. Moreover, those with low cognitive ability may also struggle to understand the potential consequences of spreading disinformation.
Finally, while weobserve the direct eect of cognitive ability on
sharing, wedo not observe any moderation eects. In general, the results
highlight that individuals with high FOMO and DSR may share, even if
they have high cognitive ability and are capable of critically evaluating
information, thereby lacking restraint. ese patterns conrm that
certain social-psychological traits and problematic social media use may
TABLE2 Hierarchical regression analysis predicting deepfakes sharing.
United
States
China Singapore Indonesia Malaysia Philippines Thailand Vietnam
β β β β β β β β
Age −0.32*** −0.14*** −0.26*** −0.18*** −0.21*** −0.23*** −0.29*** −0.02
Male −0.08*** −0.05 −0.10*** −0.10*** −0.11*** −0.13*** −0.07*−0.12***
Education −0.08** 0.04 −0.04 0.06*−0.07*−0.04 −0.03 −0.05
Income 0.05*0.11** −0.01 0.07*−0.02 0.02 0.05 0.00
SM news 0.19*** 0.14*** 0.13*** 0.12*** 0.04 0.16*** −0.01 0.16***
TV news 0.02 0.07 −0.02 0.09*−0.04 0.06 0.05 −0.08*
Radio news 0.21*** 0.13** 0.19*** 0.17*** 0.18*** 0.08 0.15*** 0.17***
Print news 0.17*** 0.09*0.11** 0.15*** 0.12** 0.12** 0.17*** 0.15***
R20.42*** 0.16*** 0.16*** 0.23*** 0.12*** 0.15*** 0.19*** 0.12***
Step2:
Variables of
interest
Per Accuracy 0.28*** 0.32*** 0.31*** 0.28*** 0.26*** 0.18*** 0.30*** 0.30***
Def self-reg
(DSR)
0.07*−0.01 0.16*** 0.08** 0.07*0.05 0.06*−0.00
FOMO 0.25*** 0.14*** 0.27*** 0.18*** 0.33*** 0.22*** 0.28*** 0.31***
Cog ability
(CA)
−0.10*** −0.05 −0.14*** −0.11*** −0.10** −0.05*−0.16***
R20.17*** 0.12*** 0.32*** 0.13*** 0.23*** 0.11*** 0.22*** 0.26***
Step3:
Moderation
eects
DSR x CA 0.10 −0.07 −0.33*** 0.10 −0.07 0.01 −0.07
FOMO x CA −0.14 0.26 0.12 0.02 0.01 0.19 0.03
R20.002 0.003 0.008*** 0.001 0.001 0.002 0.001
Tot al R20.59 0.28 0.49 0.36 0.35 0.26 0.42 0.38
1. ***p < 0.001; **p < 0.01; *p < 0.05. 2. males = 0, females = 1; 3. cognitive ability was excluded from Indonesia due to low reliability.
Ahmed et al. 10.3389/fpsyg.2023.1127507
Frontiers in Psychology 06 frontiersin.org
override critical thinking. However, future studies could consider using
the need for cognition (Cacioppo and Petty, 1982) rather than cognitive
ability as a moderator. A higher need for cognition can direct individuals
to more cognitive eort toward complex cognitive processes. Further,
previous evidence conrms that the successful implementation of self-
control requires the availability of limited resources (Schmeichel and
Baumeister, 2004). Individuals with a high need for cognition are also
found to exhibit greater self-control (Bertrams and Dickhäuser, 2009).
erefore, it may beworthwhile to examine the inuence of the need for
cognition on not only DSR but also the sharing of deepfakes.
Overall, the factors discussed in this study can help explain why certain individuals contribute to deepfake propagation on social media. To prevent the spread of deepfakes, it is essential to promote healthy self-regulation of social media use and to provide individuals with the tools and skills necessary to manage their excessive use of social media. In addition, promoting interventions that develop the critical skills of individuals with low cognitive ability is also essential. This would safeguard certain groups from spreading disinformation.
Before weconclude, it is important to acknowledge the limitations
of this study.
First, the study uses cross-sectional data, which limits any causal inferences. While the findings are consistent with previous research, longitudinal study designs are necessary to establish causality.
Second, while measuring the effects of social media news use, we did not consider the differences among social media platforms (e.g., WhatsApp vs. TikTok). Given the differences in affordances across platforms, deepfake sharing behavior may well vary. Therefore, it is recommended that future work consider platform differences for more nuanced observations.
Third, our study is based on an online panel of survey respondents. While we used quota sampling strategies to enhance the generalizability of the findings, the results may not be representative of the overall population, as the online sample characteristics differ from the general population (see Supplementary Table D1 for distribution). However, our investigation focused on a form of online behavior (sharing); the representativeness of the findings should therefore be evaluated accordingly.
Fourth, some of our measures are single-item (e.g., perceived accuracy), and, overall, we use survey methods that are subject to social desirability bias. Future studies could use unobtrusive data to observe deepfake sharing behavior on social media.
Notwithstanding these limitations, this study offers insights into deepfake propagation on social media by highlighting the roles played by the perceived accuracy of disinformation, FOMO, DSR of social media use, and cognitive ability. Within this setting, we recommend to policymakers that any attempt to reduce the spread of deepfakes on social media should factor in the individual traits of social media users. Intervention programs are less likely to succeed if generalized assumptions are made about social media users that do not consider the variance in the psychological characteristics and intellectual abilities of audiences.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by the Institutional Review Board, Nanyang Technological University. The patients/participants provided their written informed consent to participate in this study.
Author contributions
SA designed the study, analyzed the data, and wrote the
manuscript. SN wrote the manuscript. AB analyzed the data and wrote
the manuscript. All authors approved the submitted version.
Funding
This work was supported by Nanyang Technological University grant number 21093.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the
authors and do not necessarily represent those of their affiliated
organizations, or those of the publisher, the editors and the
reviewers. Any product that may be evaluated in this article, or
claim that may be made by its manufacturer, is not guaranteed or
endorsed by the publisher.
Supplementary material
The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1127507/full#supplementary-material
References
Ahmed, S. (2021a). Fooled by the fakes: cognitive differences in perceived claim accuracy and sharing intention of non-political deepfakes. Pers. Individ. Dif. 182:111074. doi: 10.1016/j.paid.2021.111074

Ahmed, S. (2021b). Who inadvertently shares deepfakes? Analyzing the role of political interest, cognitive ability, and social network size. Telemat. Inform. 57:101508. doi: 10.1016/j.tele.2020.101508

Ahmed, S. (2022). Disinformation sharing thrives with fear of missing out among low cognitive news users: a cross-national examination of intentional sharing of deepfakes. J. Broadcast. Electron. Media 66, 89–109. doi: 10.1080/08838151.2022.2034826

Apuke, O. D., Omar, B., Tunca, E. A., and Gever, C. V. (2022). Information overload and misinformation sharing behaviour of social media users: testing the moderating role of cognitive ability. J. Inform. Sci. doi: 10.1177/01655515221121942

Balakrishnan, V., Ng, K. S., and Rahim, H. A. (2021). To share or not to share – the underlying motives of sharing fake news amidst the COVID-19 pandemic in Malaysia. Technol. Soc. 66:101676. doi: 10.1016/j.techsoc.2021.101676

Bertrams, A., and Dickhäuser, O. (2009). High-school students' need for cognition, self-control capacity, and school achievement: testing a mediation hypothesis. Learn. Individ. Differ. 19, 135–138. doi: 10.1016/j.lindif.2008.06.005

Beyens, I., Frison, E., and Eggermont, S. (2016). "I don't want to miss a thing": adolescents' fear of missing out and its relationship to adolescents' social needs, Facebook use, and Facebook related stress. Comput. Hum. Behav. 64, 1–8. doi: 10.1016/j.chb.2016.05.083

Brooks, C. F. (2021). Popular discourse around deepfakes and the interdisciplinary challenge of fake video distribution. Cyberpsychol. Behav. Soc. Netw. 24, 159–163. doi: 10.1089/cyber.2020.0183

Cacioppo, J. T., and Petty, R. E. (1982). The need for cognition. J. Pers. Soc. Psychol. 42, 116–131. doi: 10.1037/0022-3514.42.1.116

Cochran, J. D., and Napshin, S. A. (2021). Deepfakes: awareness, concerns, and platform accountability. Cyberpsychol. Behav. Soc. Netw. 24, 164–172. doi: 10.1089/cyber.2020.0100

Ecker, U. K. H., and Antonio, L. M. (2021). Can you believe it? An investigation into the impact of retraction source credibility on the continued influence effect. Mem. Cogn. 49, 631–644. doi: 10.3758/s13421-020-01129-y

Ganzach, Y., Hanoch, Y., and Choma, B. L. (2019). Attitudes toward presidential candidates in the 2012 and 2016 American elections: cognitive ability and support for Trump. Soc. Psychol. Personal. Sci. 10, 924–934. doi: 10.1177/1948550618800494

Hameleers, M., Powell, T. E., Van Der Meer, T. G. L. A., and Bos, L. (2020). A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media. Polit. Commun. 37, 281–301. doi: 10.1080/10584609.2019.1674979

Hayes, A. F. (2018). Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach. New York, NY: Guilford Publications.

Islam, A. K. M. N., Laato, S., Talukder, S., and Sutinen, E. (2020). Misinformation sharing and social media fatigue during COVID-19: an affordance and cognitive load perspective. Technol. Forecast. Soc. Change 159:120201. doi: 10.1016/j.techfore.2020.120201

LaRose, R., and Eastin, M. S. (2004). A social cognitive theory of internet uses and gratifications: toward a new model of media attendance. J. Broadcast. Electron. Media 48, 358–377. doi: 10.1207/s15506878jobem4803_2

LaRose, R., Lin, C. C., and Eastin, M. S. (2003). Unregulated internet usage: addiction, habit, or deficient self-regulation? Media Psychol. 5, 225–253. doi: 10.1207/S1532785XMEP0503_01

Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., and Cook, J. (2012). Misinformation and its correction: continued influence and successful debiasing. Psychol. Sci. Public Interest 13, 106–131. doi: 10.1177/1529100612451018

Nurse, M. S., Ross, R. M., Isler, O., and Van Rooy, D. (2021). Analytic thinking predicts accuracy ratings and willingness to share COVID-19 misinformation in Australia. Mem. Cogn. 50, 425–434. doi: 10.3758/s13421-021-01219-5

Przybylski, A. K., Murayama, K., DeHaan, C. R., and Gladwell, V. (2013). Motivational, emotional, and behavioral correlates of fear of missing out. Comput. Hum. Behav. 29, 1841–1848. doi: 10.1016/j.chb.2013.02.014

Schmeichel, B. J., and Baumeister, R. F. (2004). "Self-regulatory strength" in Handbook of Self-Regulation: Research, Theory, and Applications. eds. R. F. Baumeister and K. D. Vohs (New York, NY: Guilford Press), 84–98.

Talwar, S., Dhir, A., Kaur, P., Zafar, N., and Alrasheedy, M. (2019). Why do people share fake news? Associations between the dark side of social media use and fake news sharing behavior. J. Retail. Consum. Serv. 51, 72–82. doi: 10.1016/j.jretconser.2019.05.026

Thorndike, R. L. (1942). Two screening tests of verbal intelligence. J. Appl. Psychol. 26, 128–135. doi: 10.1037/h0060053

t'Serstevens, F., Piccillo, G., and Grigoriev, A. (2022). Fake news zealots: effect of perception of news on online sharing behavior. Front. Psychol. 13:859534. doi: 10.3389/fpsyg.2022.859534

Vally, Z. (2021). "Compliance with health-protective behaviors in relation to COVID-19: the roles of health-related misinformation, perceived vulnerability, and personality traits" in Mental Health Effects of COVID-19. ed. A. A. Moustafa (Cambridge, MA: Academic Press), 263–281. doi: 10.1016/b978-0-12-824289-6.00001-5

Vosoughi, S., Roy, D., and Aral, S. (2018). The spread of true and false news online. Science 359, 1146–1151. doi: 10.1126/science.aap9559

Wechsler, D. (1958). The Measurement and Appraisal of Adult Intelligence. 4th ed. Baltimore, MD: Williams and Wilkins. doi: 10.1037/11167-000

Westerlund, M. (2019). The emergence of deepfake technology: a review. Technol. Innov. Manag. Rev. 9, 39–52. doi: 10.22215/timreview/1282