Understanding the role of fear of missing out and deficient self-regulation in sharing of deepfakes on social media: Evidence from eight countries
Saifuddin Ahmed1*, Sheryl Wei Ting Ng2 and Adeline Wei Ting Bee1
1 Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore; 2 Department of Communications and New Media, National University of Singapore, Singapore, Singapore
*Correspondence: Saifuddin Ahmed, sahmed@ntu.edu.sg
Deepfakes are a troubling form of disinformation that has been drawing increasing attention. Yet, there remains a lack of psychological explanations for deepfake sharing behavior and an absence of research knowledge in non-Western contexts where public knowledge of deepfakes is limited. We conduct a cross-national survey study in eight countries to examine the role of fear of missing out (FOMO), deficient self-regulation (DSR), and cognitive ability in deepfake sharing behavior. Results are drawn from a comparative survey in seven Asian contexts (China, Indonesia, Malaysia, Philippines, Singapore, Thailand, and Vietnam) and are compared with findings from the United States, where discussions about deepfakes have been most relevant. Overall, the results suggest that those who perceive the deepfakes to be accurate are more likely to share them on social media. Furthermore, in all countries, sharing is also driven by the social-psychological trait of FOMO. DSR of social media use was also found to be a critical factor in explaining deepfake sharing. It is also observed that individuals with low cognitive ability are more likely to share deepfakes. However, we also find that the effects of DSR of social media use and FOMO are not contingent upon users' cognitive ability. The results of this study contribute to strategies to limit deepfake propagation on social media.
KEYWORDS
deepfakes, disinformation, FOMO, self-regulation, cognitive ability, sharing, self-control, Asia
1. Introduction
Experts have recently warned against the dangers of deepfakes, a form of disinformation created by artificial intelligence. Specifically, deepfakes are highly realistic but synthetically generated video or audio representations of individuals created using artificial intelligence (Westerlund, 2019). They are often more striking, persuasive, and deceptive than text-based disinformation (Hameleers et al., 2020). The potential dangers of deepfakes have therefore drawn significant academic attention, with several scholars studying public engagement with deepfakes and their consequences (Brooks, 2021; Ahmed, 2021a).
However, there is little evidence on why users share deepfakes on social media. Some studies suggest that users with high political interest and those with low cognitive ability are more likely
to share deepfakes (Ahmed, 2021b). Still, there is a lack of psychological explanations for this behavior. For instance, while some studies have explored the role of cognitive ability in reducing the spread of general misinformation (Apuke et al., 2022), there is a lack of research on its effect on deepfake sharing behavior.
Moreover, most current research on deepfakes is based in Western democratic contexts, such as the United States, where general awareness of deepfakes may be higher because they have featured more heavily in public discourse. However, it is unclear how these findings would apply in non-Western contexts where public knowledge of deepfakes is limited.
To gain a more complete understanding of why users share deepfakes on social media, it is necessary to examine the role of psychological traits and cognitive ability in deepfake sharing behavior across multiple contexts. More precisely, we focus on a set of psychological (e.g., fear of missing out) and cognitive factors (e.g., cognitive ability) that can help explain deepfake sharing on social media. Further, this study presents the results of a cross-national comparative survey on deepfake sharing behavior in seven Asian contexts (China, Indonesia, Malaysia, Philippines, Singapore, Thailand, and Vietnam) and compares these findings to the United States, where discussions about deepfakes have been most relevant. The cross-national design of the study increases the generalizability of the findings. The specific goals of the study are outlined below.
In the first step, we examine the role of fear of missing out (FOMO) in deepfake sharing behavior. FOMO is the psychological anxiety that one might be left out of exciting exchanges in one's social circles (Przybylski et al., 2013). Some scholars have found that FOMO is positively associated with sharing fake news online (Talwar et al., 2019), while others have found it insignificant in predicting fake news sharing (Balakrishnan et al., 2021). These conflicting results highlight the need to clarify the influence of FOMO on misinformation sharing. Nevertheless, regarding deepfakes specifically, FOMO has been found to positively predict intentional deepfake sharing (Ahmed, 2022). However, Ahmed (2022) focused on intentional sharing behavior in two technologically advanced contexts: the United States and Singapore. It is unclear whether such findings would be replicated in the less technologically advanced countries we study in this paper. However, if participants in these countries follow the same processes as social media users in the United States and Singapore, we would expect levels of FOMO to be associated with deepfake sharing behavior. Given the role of FOMO in the existing literature, we hypothesize that FOMO will be positively associated with the sharing of deepfakes (H1).
Next, this study evaluates the effect of self-regulation on deepfake sharing behavior. Self-regulation is "the process of self-control through the subfunctions of self-monitoring, judgmental process, and self-reaction" (LaRose et al., 2003, p. 232). With sufficient self-regulation, individuals can modulate their behaviors through self-observation. Conversely, deficient self-regulation (DSR), in which conscious self-control is weakened (LaRose et al., 2003), manifests in behavioral addictions, such as addiction to the internet, through a lack of control over impulses (Vally, 2021). Prior studies have reported that DSR significantly predicts unverified information sharing (Islam et al., 2020). DSR is also associated with social media fatigue (Islam et al., 2020; Vally, 2021). In turn, social media fatigue is positively associated with sharing fake news online (Talwar et al., 2019). Hence, we hypothesize that DSR will be positively associated with the sharing of deepfakes (H2).
It is also essential to investigate the role of cognitive ability in the sharing of deepfakes, as it can provide insight into how individuals make decisions about sharing potentially harmful content. Cognitive ability, which refers to an individual's mental capacity for problem-solving, decision-making, and learning, can influence how individuals process information and make decisions. An aspect of cognitive ability that is vital to engagement with deepfakes is "the ability or the motivation to think analytically" (Ahmed, 2021b, p. 3). Prior research on the association of cognitive ability with deepfake perception has reported that individuals with higher cognitive ability are less likely to engage in deepfake sharing because they have better discernment and decision-making abilities (Ahmed, 2021b). This may be because individuals with high cognitive ability are known to make sound judgments and are inclined toward problem-solving (Apuke et al., 2022). As such, these individuals might perform better at tasks that require reasoning and assessment, such as discerning falsehoods from the truth (Nurse et al., 2021). Although high cognitive ability does not make an individual immune to misinformation (Apuke et al., 2022), it appears to confer some advantages for navigating misinformation. Given the robustness of cognitive ability in safeguarding users in misinformation engagement, we hypothesize that cognitive ability will be negatively associated with the sharing of deepfakes (H3).
Finally, given that cognitive ability can influence deepfake sharing, we also explore whether the effects of FOMO and DSR are contingent upon individuals' cognitive ability. We anticipate that cognitive ability might act as a buffer against individual traits such as FOMO and DSR in deepfake sharing. Therefore, we pose a research question: how would cognitive ability moderate the association between (a) FOMO and (b) DSR and the sharing of deepfakes (RQ1)?
Investigating the moderating effect of cognitive ability on the relationship between FOMO, DSR, and deepfake sharing can provide insight into how individuals make decisions about sharing potentially harmful content. Such a study can also inform interventions and strategies to reduce the spread of harmful deepfake content. Therefore, we report a cross-national comparative study that uses online panel survey data from eight countries to test the relationships between FOMO, DSR, cognitive ability, and deepfake sharing. We also tested for the moderating role of cognitive ability in the relationships mentioned above. Overall, this study contributes to the growing literature on user engagement with disinformation and will help us understand the critical psychological underpinnings of deepfake sharing.
2. Materials and methods
2.1. Participants
We contracted Qualtrics LLC, a survey research firm, to conduct surveys in eight countries: the United States, China, Singapore, Indonesia, Malaysia, Philippines, Thailand, and Vietnam. We used a quota sampling approach to match the sample to population parameters, focusing on age and gender quotas. This was done to generalize our findings to the national adult population. The surveys were conducted concurrently in June 2022 and were translated into
national languages. We gathered 1,008 participants on average per country: United States (N = 1,010), China (N = 1,010), Singapore (N = 1,008), Indonesia (N = 1,010), Malaysia (N = 1,002), Philippines (N = 1,010), Thailand (N = 1,010), and Vietnam (N = 1,010). The study was approved by the institutional review board at Nanyang Technological University.
2.2. Measurements
Deepfake sharing was measured using four deepfake video stimuli (see Supplementary Table A1). The deepfakes were chosen based on their virality on social media. They included a mix of political (e.g., Mark Zuckerberg and Vladimir Putin) and entertainment deepfakes (e.g., Tom Cruise and Kim Kardashian). This approach enhances the external validity of our design. Moreover, other studies have also used some of these deepfakes (see Cochran and Napshin, 2021; Ahmed, 2021a).
For each deepfake video, we ensured that participants were able to play it in full-screen mode. We also asked participants whether they could play the video file and had successfully watched the video. We then asked how likely they were to share that video on social media with others. Participants responded on a 5-point scale ranging from 1 = extremely to 5 = not at all. We then reverse-coded and averaged their responses across the four stimuli, so that a higher score represents greater sharing.
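As an illustration of this index construction, the minimal sketch below shows how the four sharing items could be reverse-coded and averaged in Python with pandas. The column names and toy data are hypothetical rather than taken from the study's dataset, and the same averaging logic applies to the perceived accuracy, DSR, and FOMO indices described below.

```python
import pandas as pd

# Hypothetical responses to the four sharing items (1 = extremely ... 5 = not at all)
df = pd.DataFrame({
    "share_zuckerberg": [1, 4, 5],
    "share_putin":      [2, 3, 5],
    "share_cruise":     [1, 5, 4],
    "share_kardashian": [2, 4, 5],
})

share_items = ["share_zuckerberg", "share_putin", "share_cruise", "share_kardashian"]

# Reverse-code so that higher values indicate greater willingness to share
df[share_items] = 6 - df[share_items]

# Average across the four stimuli to form the sharing index
df["deepfake_sharing"] = df[share_items].mean(axis=1)

print(df["deepfake_sharing"])  # the first (most willing) respondent gets the highest score
```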
Perceived accuracy was measured using the same four deepfake stimuli. Since perceived accuracy has been found to be closely related to sharing disinformation (Ahmed, 2021a; t'Serstevens et al., 2022), we included it as a critical covariate in our analyses. For each deepfake, we asked participants how accurate the central claim presented in the video was (e.g., for the Mark Zuckerberg deepfake we asked, "how accurate is the claim that Mark Zuckerberg said whoever controls the data, controls the future?"). Participants responded on a 5-point scale ranging from 1 = not at all accurate to 5 = extremely accurate. Responses for the perceived accuracy of the four deepfakes were averaged to create a scale of perceived accuracy.
Self-regulation (deficient) was measured using a five-item scale adapted from LaRose and Eastin (2004). Items included "I sometimes try to hide how much time I spend on social media from my family or friends" and "I feel my social media use is out of control," among others. Participants responded on a 5-point scale ranging from 1 = strongly disagree to 5 = strongly agree. The items were averaged to create an index of DSR.
Fear of missing out was measured using a previously validated 10-item scale (Przybylski et al., 2013). Example items include "I get worried when I find out my friends are having fun without me" and "It bothers me when I miss an opportunity to meet up with friends." Participants responded on a 5-point scale ranging from 1 = not at all true of me to 5 = extremely true of me. The items were averaged to create an index of FOMO.
Cognitive ability was measured using the wordsum test. It includes 10 vocabulary words for which participants are required to find the closest synonym from five options. This test is well established and has been used widely in the literature, including in deepfake and fake news studies (Thorndike, 1942; Wechsler, 1958; Ganzach et al., 2019; Ahmed, 2021b).
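For readers unfamiliar with the wordsum format, the short sketch below illustrates how such a test is typically scored as the count of correct synonym choices out of 10. The items and answer key are placeholders, not the actual wordsum content.

```python
# Hypothetical answer key and one respondent's choices for a 10-item synonym test
answer_key = ["b", "d", "a", "c", "e", "a", "b", "d", "c", "e"]
responses  = ["b", "d", "a", "a", "e", "a", "b", "c", "c", "e"]

# Cognitive ability score = number of correct synonym choices (0-10)
cognitive_ability = sum(resp == key for resp, key in zip(responses, answer_key))
print(cognitive_ability)  # 8
```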
Descriptive statistics for the variables above can be found in Table 1. All variables showed satisfactory reliability in each context (except for cognitive ability in Indonesia, discussed below). See Supplementary Table B1 for details.
2.3. Covariates
This study includes several covariates that may influence deepfake sharing behavior. These include demographic variables, (a) age, (b) gender, (c) education, and (d) income, as well as news consumption variables, (e) social media news consumption, (f) TV news consumption, (g) radio news consumption, and (h) print news consumption (see Table 1).
2.4. Analysis
We ran hierarchical regression analyses for the eight countries using SPSS and explored the moderation effects using the Hayes (2018) PROCESS macro for SPSS. We also conducted reliability analyses for the key variables and found that cognitive ability had low reliability in Indonesia. We thus excluded cognitive ability from our analysis for Indonesia.
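As a rough illustration of the kind of reliability check reported here, the snippet below computes Cronbach's alpha for a multi-item scale from item and total-score variances. The formula is standard, but the data and column names are hypothetical, and the paper does not specify its exact computation routine.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses to five DSR items for 100 respondents (1-5 Likert points)
dsr_items = pd.DataFrame(
    np.random.default_rng(0).integers(1, 6, size=(100, 5)),
    columns=[f"dsr_{i}" for i in range(1, 6)],
)
print(round(cronbach_alpha(dsr_items), 2))
```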
In addition to running separate regression models for each country, we also ran a pooled regression model. The results are in line with those presented in the study (see Supplementary Table C1 for details).
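To make the modeling steps concrete, here is a minimal sketch of an analogous three-step hierarchical regression with interaction (moderation) terms in Python using statsmodels. The authors' actual analysis used SPSS and the Hayes (2018) PROCESS macro, so the simulated data, variable names, and reduced covariate set here are purely illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical single-country data set; these are NOT the study's variables or values
rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(40, 13, n),
    "female": rng.integers(0, 2, n),   # males = 0, females = 1, as coded in the paper
    "education": rng.normal(5.3, 1.2, n),
    "income": rng.normal(5.0, 2.5, n),
    "sm_news": rng.normal(3.3, 1.2, n),
    "accuracy": rng.normal(2.8, 0.9, n),
    "dsr": rng.normal(2.6, 1.0, n),
    "fomo": rng.normal(2.3, 0.9, n),
    "cog_ability": rng.normal(5.5, 2.0, n),
})
df["sharing"] = (0.3 * df["accuracy"] + 0.25 * df["fomo"] + 0.1 * df["dsr"]
                 - 0.1 * df["cog_ability"] - 0.02 * df["age"] + rng.normal(0, 1, n))

# Standardize continuous variables so coefficients resemble standardized betas
continuous = ["sharing", "age", "education", "income", "sm_news",
              "accuracy", "dsr", "fomo", "cog_ability"]
df[continuous] = (df[continuous] - df[continuous].mean()) / df[continuous].std()

# Step 1: covariates; Step 2: variables of interest; Step 3: moderation terms
step1 = "sharing ~ age + female + education + income + sm_news"
step2 = step1 + " + accuracy + dsr + fomo + cog_ability"
step3 = step2 + " + dsr:cog_ability + fomo:cog_ability"

m1 = smf.ols(step1, data=df).fit()
m2 = smf.ols(step2, data=df).fit()
m3 = smf.ols(step3, data=df).fit()

print(round(m2.rsquared - m1.rsquared, 3))                  # incremental R-squared at step 2
print(m3.params[["dsr:cog_ability", "fomo:cog_ability"]])   # moderation estimates
```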
3. Results
The results of the regression analysis are presented in Table 2. In our preliminary analyses, we found that age was negatively associated with sharing in all countries except Vietnam: United States (β = −0.32, p < 0.001), China (β = −0.14, p < 0.001), Singapore (β = −0.26, p < 0.001), Indonesia (β = −0.18, p < 0.001), Malaysia (β = −0.21, p < 0.001), Philippines (β = −0.23, p < 0.001), and Thailand (β = −0.29, p < 0.001), but not Vietnam (β = 0.02, p = 0.44). This suggests that older adults tend to share less.
We also found a sex effect: males (coded males = 0, females = 1 in the regression models) were more likely to share in most contexts. The results were significant for the United States (β = −0.08, p < 0.001), Singapore (β = −0.10, p < 0.001), Indonesia (β = −0.10, p < 0.001), Malaysia (β = −0.11, p < 0.001), Philippines (β = −0.13, p < 0.001), Thailand (β = −0.07, p < 0.05), and Vietnam (β = −0.12, p < 0.001). The only exception was China (β = −0.05, p = 0.11).
Social media news consumption was also a significant predictor of sharing behavior in most countries. Individuals who consumed more social media news were more likely to share in the United States (β = 0.19, p < 0.001), China (β = 0.14, p < 0.001), Singapore (β = 0.13, p < 0.001), Indonesia (β = 0.12, p < 0.001), Philippines (β = 0.16, p < 0.001), and Vietnam (β = 0.16, p < 0.001). The results for Malaysia (β = 0.04, p = 0.18) and Thailand (β = −0.01, p = 0.76) were statistically nonsignificant.
We also found that the perceived accuracy of the deepfakes was a strong and positive predictor in all countries: United States (β = 0.28, p < 0.001), China (β = 0.32, p < 0.001), Singapore (β = 0.31, p < 0.001), Indonesia (β = 0.28, p < 0.001), Malaysia (β = 0.26, p < 0.001), Philippines (β = 0.18, p < 0.001), Thailand (β = 0.30, p < 0.001), and Vietnam (β = 0.30, p < 0.001). In essence, those who thought the
deepfakes were true were more likely to have sharing intentions. These findings are consistent with prior disinformation research (Ahmed, 2021a).
Next, we found strong support for H1: FOMO was positively associated with sharing. This was true for all countries: United States (β = 0.25, p < 0.001), China (β = 0.14, p < 0.001), Singapore (β = 0.27, p < 0.001), Indonesia (β = 0.18, p < 0.001), Malaysia (β = 0.33, p < 0.001), Philippines (β = 0.22, p < 0.001), Thailand (β = 0.28, p < 0.001), and Vietnam (β = 0.31, p < 0.001).
Next, we found that in five of the eight countries, namely the United States (β = 0.07, p < 0.05), Singapore (β = 0.16, p < 0.001), Indonesia (β = 0.08, p < 0.01), Malaysia (β = 0.07, p < 0.05), and Thailand (β = 0.06, p < 0.05), DSR was positively associated with the sharing of deepfakes. In other words, those who were more impulsive were more likely to share deepfakes. Thus, H2 is supported.
We also found support for H3: those with higher cognitive ability were less likely to share deepfakes. This was true for the United States (β = −0.10, p < 0.001), Singapore (β = −0.14, p < 0.001), Malaysia (β = −0.11, p < 0.001), Philippines (β = −0.10, p < 0.001), Thailand (β = −0.05, p < 0.001), and Vietnam (β = −0.16, p < 0.001).
As for RQ1, we did not find evidence that cognitive ability moderated the association between (a) FOMO and (b) DSR and deepfake sharing. The only exception was Singapore (β = −0.33, p < 0.001), where cognitive ability diminished the effects of DSR. In other words, those who were most deficient in self-regulation and had the lowest cognitive ability were most likely to share deepfakes (Table 2). Overall, however, we find that the impact of DSR and FOMO on sharing behavior is not contingent upon the cognitive ability of individuals.
4. Discussion
Most studies have investigated social media sharing of general mis- and disinformation. This study is a rare attempt at analyzing sharing associated with deepfakes in eight countries. Overall, the results suggest that those who perceive the deepfakes to be accurate are more likely to share them on social media. Furthermore, in all countries, sharing is also driven by the social-psychological trait of FOMO. DSR of social media use was also found to be a critical factor in explaining the sharing of deepfakes, though FOMO is a more consistent predictor than DSR. It is also observed that individuals with low cognitive ability are more likely to engage in deepfake sharing. However, we also find that the effects of DSR of social media use and FOMO are not contingent upon users' cognitive ability. In sum, the study identifies critical factors associated with the sharing of deepfakes on social media. The findings are discussed in detail below.
First, the study provides empirical support for the often-discussed relationship between perceived accuracy and the sharing of disinformation. In the wider literature, many have questioned the effectiveness of flagged corrections. This is because of the continued influence effect, in which people continue to act on their misinformed beliefs even after they have been debunked (Lewandowsky et al., 2012; Ecker and Antonio, 2021). Because the continued influence effect is resilient across situations and people, researchers have generally taken a modest stance toward correcting mis- or disinformation. In view of this debate, the results of this study suggest that targeting accuracy perceptions of deepfakes on social media may still help curtail their propagation. This is in line with a recent study by Ecker and Antonio (2021), which outlined certain conditions for fake news retraction efficacy. They argue that retractions from highly trustworthy and authoritative sources can mitigate the continued influence effect. Similarly, when people are given persuasive reasons to correct their beliefs, they may be less likely to persist in the misbelief and less likely to share the deepfake. Still, given the differences in the nature of disinformation (deepfakes vs. other forms), this remains an area worth investigating.
Second, we find strong support for FOMO being associated with the sharing of deepfakes. Individuals with high levels of FOMO are found to be sensitive and susceptible to distress due to neglect by their social media peers (Beyens et al., 2016). It is possible that such individuals may share deepfakes to gain an opportunity to receive social acceptance and avoid peer neglect on social media. This is in line with Talwar et al.'s (2019) explanation drawing on self-determination theory.
TABLE 1 Mean and standard deviation of all variables under study. Columns in order: United States, China, Singapore, Indonesia, Malaysia, Philippines, Thailand, Vietnam. All values are Mean (SD), except Male, which is the percentage of male respondents.
Male: 46.00%, 51.50%, 54.90%, 50.20%, 49.20%, 51.30%, 52.90%, 53.20%
Age: 49.0 (17.5), 38.0 (12.7), 43.8 (14.0), 40.3 (13.0), 39.9 (13.7), 37.2 (13.6), 41.6 (13.0), 38.4 (12.8)
Education: 5.22 (1.21), 5.72 (0.998), 5.32 (1.22), 5.27 (1.06), 5.77 (1.61), 5.38 (1.11), 5.34 (1.17), 5.55 (0.943)
Income: 5.79 (3.76), 6.00 (2.27), 5.47 (2.64), 2.81 (2.39), 4.14 (2.51), 3.26 (2.02), 4.09 (1.91), 6.52 (2.03)
SM news: 2.90 (1.49), 3.27 (1.22), 3.17 (1.23), 3.49 (1.21), 3.76 (1.21), 4.17 (0.97), 4.21 (1.03), 3.67 (1.07)
TV news: 3.35 (1.35), 3.26 (1.17), 3.11 (1.22), 3.13 (1.24), 3.31 (1.23), 3.94 (1.08), 3.68 (1.22), 3.28 (1.09)
Radio news: 2.40 (1.28), 2.46 (1.16), 2.44 (1.22), 2.07 (1.09), 2.64 (1.13), 3.07 (1.22), 2.43 (1.19), 2.32 (1.11)
Print news: 2.20 (1.29), 2.12 (1.03), 2.51 (1.31), 2.11 (1.15), 2.63 (1.24), 2.59 (1.23), 2.37 (1.11), 2.33 (1.03)
Perceived accuracy: 2.86 (1.04), 3.24 (0.76), 2.41 (0.92), 2.94 (0.80), 2.90 (0.89), 2.81 (0.81), 2.88 (0.87), 2.62 (0.88)
DSR: 2.27 (1.16), 2.62 (0.95), 2.43 (1.04), 2.73 (0.88), 2.68 (0.90), 2.61 (0.97), 2.93 (0.90), 2.67 (0.97)
FOMO: 2.29 (1.06), 2.53 (0.84), 2.13 (0.95), 2.21 (0.82), 2.22 (0.88), 2.33 (0.88), 2.51 (0.89), 2.44 (0.94)
Cognitive ability: 5.43 (2.41), 8.00 (2.09), 5.53 (2.30), 4.78 (1.19), 4.47 (1.79), 6.56 (2.14), 5.81 (1.67), 5.23 (1.86)
In general, self-determination theory (SDT) postulates that humans have an innate desire to grow in understanding and to make meaning of themselves. SDT also posits that people tend to rely on social support; in other words, humans find it difficult to understand themselves without being in relation with others. Therefore, when their sense of relatedness is compromised, they may try to reestablish it by sharing exciting content within their social circle. Moreover, deepfakes are often intriguing, amusing, and provoking, which may add to the value of sharing them. Previous evidence also confirms that negative and novel information often spreads more rapidly (Vosoughi et al., 2018), a characteristic of most deepfakes.
Third, DSR of social media use was positively associated with the sharing of deepfakes in a majority of contexts. The relationship between DSR and sharing can be explained by the fact that when individuals suffer from DSR, they often engage in behaviors that they would not perform if they were self-aware and able to employ a certain level of self-control (LaRose et al., 2003); here, they may be more likely to share deepfakes. Further, individuals with high DSR may have difficulty managing their emotional reactions to deepfakes. As such, they may feel overwhelmed, and their emotional response could lead them to share on social media in an attempt to seek support or validation from their social network.
Fourth, individuals with low cognitive ability are vulnerable to deepfake sharing. These results are consistent with the existing literature and support the generalizability of the relationship across countries. Individuals with low cognitive ability may have difficulty understanding complex information and applying critical skills to analyzing the authenticity of deepfakes. As such, they may be more susceptible to sharing. Moreover, those with low cognitive ability may also struggle to understand the potential consequences of spreading disinformation.
Finally, while we observe a direct effect of cognitive ability on sharing, we do not observe any moderation effects. In general, the results highlight that individuals with high FOMO and DSR may lack restraint and share deepfakes even if they have high cognitive ability and are capable of critically evaluating information. These patterns confirm that certain social-psychological traits and problematic social media use may override critical thinking.
TABLE 2 Hierarchical regression analysis predicting deepfake sharing. Columns in order: United States, China, Singapore, Indonesia, Malaysia, Philippines, Thailand, Vietnam. Entries are standardized coefficients (β).
Step 1: Covariates
Age: −0.32***, −0.14***, −0.26***, −0.18***, −0.21***, −0.23***, −0.29***, −0.02
Male: −0.08***, −0.05, −0.10***, −0.10***, −0.11***, −0.13***, −0.07*, −0.12***
Education: −0.08**, 0.04, −0.04, 0.06*, −0.07*, −0.04, −0.03, −0.05
Income: 0.05*, 0.11**, −0.01, 0.07*, −0.02, 0.02, 0.05, 0.00
SM news: 0.19***, 0.14***, 0.13***, 0.12***, 0.04, 0.16***, −0.01, 0.16***
TV news: 0.02, 0.07, −0.02, 0.09*, −0.04, 0.06, 0.05, −0.08*
Radio news: 0.21***, 0.13**, 0.19***, 0.17***, 0.18***, 0.08, 0.15***, 0.17***
Print news: 0.17***, 0.09*, 0.11**, 0.15***, 0.12**, 0.12**, 0.17***, 0.15***
ΔR²: 0.42***, 0.16***, 0.16***, 0.23***, 0.12***, 0.15***, 0.19***, 0.12***
Step 2: Variables of interest
Perceived accuracy: 0.28***, 0.32***, 0.31***, 0.28***, 0.26***, 0.18***, 0.30***, 0.30***
Deficient self-regulation (DSR): 0.07*, −0.01, 0.16***, 0.08**, 0.07*, 0.05, 0.06*, −0.00
FOMO: 0.25***, 0.14***, 0.27***, 0.18***, 0.33***, 0.22***, 0.28***, 0.31***
Cognitive ability (CA): −0.10***, −0.05, −0.14***, –, −0.11***, −0.10**, −0.05*, −0.16***
ΔR²: 0.17***, 0.12***, 0.32***, 0.13***, 0.23***, 0.11***, 0.22***, 0.26***
Step 3: Moderation effects
DSR x CA: 0.10, −0.07, −0.33***, –, 0.10, −0.07, 0.01, −0.07
FOMO x CA: −0.14, 0.26, 0.12, –, 0.02, 0.01, 0.19, 0.03
ΔR²: 0.002, 0.003, 0.008***, –, 0.001, 0.001, 0.002, 0.001
Total ΔR²: 0.59, 0.28, 0.49, 0.36, 0.35, 0.26, 0.42, 0.38
Notes: 1. ***p < 0.001; **p < 0.01; *p < 0.05. 2. Males = 0, females = 1. 3. Cognitive ability was excluded for Indonesia due to low reliability.
However, future studies could consider using the need for cognition (Cacioppo and Petty, 1982) rather than cognitive ability as a moderator. A higher need for cognition can direct individuals to devote more cognitive effort to complex cognitive processes. Further, previous evidence confirms that the successful implementation of self-control requires the availability of limited resources (Schmeichel and Baumeister, 2004). Individuals with a high need for cognition have also been found to exhibit greater self-control (Bertrams and Dickhäuser, 2009). Therefore, it may be worthwhile to examine the influence of the need for cognition not only on DSR but also on the sharing of deepfakes.
Overall, the factors discussed in this study can help explain why certain individuals contribute to deepfake propagation on social media. To prevent the spread of deepfakes, it is essential to promote healthy self-regulation of social media use and to provide individuals with the tools and skills necessary to manage excessive social media use. In addition, promoting interventions that develop the critical skills of individuals with low cognitive ability is also essential. This would safeguard certain groups from spreading disinformation.
Before we conclude, it is important to acknowledge the limitations of this study.
First, the study uses cross-sectional data, which limits causal inference. While the findings are consistent with previous research, longitudinal study designs are necessary to establish causality.
Second, while measuring the effects of social media news use, we did not consider differences among social media platforms (e.g., WhatsApp vs. TikTok). Given the differences in affordances across platforms, deepfake sharing behavior may well vary. Therefore, it is recommended to consider platform differences for more nuanced observations.
Third, our study is based on an online panel of survey respondents. While we use quota sampling strategies to enhance the generalizability of the findings, the results may not be representative of the overall population; the online sample characteristics differ from the general population (see Supplementary Table D1 for distribution). However, our investigation focused on a form of online behavior (sharing); therefore, the representativeness of the findings should be evaluated accordingly.
Fourth, some of our measures rely on single items (e.g., perceived accuracy), and, overall, we use survey methods that are subject to social desirability bias. Future studies could use unobtrusive data to observe deepfake sharing behavior on social media.
Notwithstanding these limitations, this study offers insights into deepfake propagation on social media by highlighting the role played by the perceived accuracy of disinformation, FOMO, DSR of social media use, and cognitive ability. Within this setting, we recommend that policymakers factor in the individual traits of social media users in any attempt to reduce the spread of deepfakes on social media. Intervention programs are less likely to succeed if they rest on generalized assumptions about social media users that do not consider the variance in the psychological characteristics and intellectual abilities of audiences.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by the Institutional Review Board, Nanyang Technological University. The participants provided their written informed consent to participate in this study.
Author contributions
SA designed the study, analyzed the data, and wrote the
manuscript. SN wrote the manuscript. AB analyzed the data and wrote
the manuscript. All authors approved the submitted version.
Funding
This work was supported by Nanyang Technological University grant number 21093.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the
authors and do not necessarily represent those of their affiliated
organizations, or those of the publisher, the editors and the
reviewers. Any product that may be evaluated in this article, or
claim that may be made by its manufacturer, is not guaranteed or
endorsed by the publisher.
Supplementary material
The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1127507/full#supplementary-material
References
Ahmed, S. (2021a). Fooled by the fakes: cognitive differences in perceived claim accuracy and sharing intention of non-political deepfakes. Pers. Individ. Dif. 182:111074. doi: 10.1016/j.paid.2021.111074
Ahmed, S. (2021b). Who inadvertently shares deepfakes? Analyzing the role of
political interest, cognitive ability, and social network size. Tel. Inform. 57:101508. doi:
10.1016/j.tele.2020.101508
Ahmed, S. (2022). Disinformation sharing thrives with fear of missing out among low
cognitive news users: a cross-national examination of intentional sharing of deepfakes.
J. Broadcast. Elec. Media. 66, 89–109. doi: 10.1080/08838151.2022.2034826
Apuke, O. D., Omar, B., Tunca, E. A., and Gever, C. V. (2022). Information overload
and misinformation sharing behaviour of social media users: testing the moderating role
of cognitive ability. J. Inform. Sci. 10:1942. doi: 10.1177/01655515221121942
Balakrishnan, V., Ng, K. S., and Rahim, H. A. (2021). To share or not to share – the
underlying motives of sharing fake news amidst the COVID-19 pandemic in Malaysia.
Technol. Soc. 66:101676. doi: 10.1016/j.techsoc.2021.101676
Bertrams, A., and Dickhäuser, O. (2009). High-school students' need for cognition, self-control capacity, and school achievement: testing a mediation hypothesis. Learn. Individ. Differ. 19, 135–138. doi: 10.1016/j.lindif.2008.06.005
Beyens, I., Frison, E., and Eggermont, S. (2016). “I don’t want to miss a thing”: adolescents’
fear of missing out and its relationship to adolescents’ social needs, Facebook use, and
Facebook related stress. Comp. Hum. Behav. 64, 1–8. doi: 10.1016/j.chb.2016.05.083
Brooks, C. F. (2021). Popular discourse around Deepfakes and the interdisciplinary
challenge of fake video distribution. Cyberpsychol. Behav. Soc. Netw. 24, 159–163. doi:
10.1089/cyber.2020.0183
Cacioppo, J. T., and Petty, R. E. (1982). The need for cognition. J. Pers. Soc. Psychol. 42, 116–131. doi: 10.1037/0022-3514.42.1.116
Cochran, J. D., and Napshin, S. A. (2021). Deepfakes: awareness, concerns, and platform accountability. Cyberpsychol. Behav. Soc. Netw. 24, 164–172. doi: 10.1089/cyber.2020.0100
Ecker, U. K. H., and Antonio, L. M. (2021). Can you believe it? An investigation into the impact of retraction source credibility on the continued influence effect. Mem. Cogn. 49, 631–644. doi: 10.3758/s13421-020-01129-y
Ganzach, Y., Hanoch, Y., and Choma, B. L. (2019). Attitudes toward presidential candidates in the 2012 and 2016 American elections: cognitive ability and support for Trump. Soc. Psychol. Personal. Sci. 10, 924–934. doi: 10.1177/1948550618800494
Hameleers, M., Powell, T. E., Van Der Meer, T. G. L. A., and Bos, L. (2020). A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media. Polit. Commun. 37, 281–301. doi: 10.1080/10584609.2019.1674979
Hayes, A. F. (2018). Introduction to mediation, moderation, and conditional process
analysis: A regression-based approach. New York, NY: Guilford Publications.
Islam, A. K. M. N., Laato, S., Talukder, S., and Sutinen, E. (2020). Misinformation sharing and social media fatigue during COVID-19: an affordance and cognitive load perspective. Technol. Forecast. Soc. Change 159:120201. doi: 10.1016/j.techfore.2020.120201
LaRose, R., and Eastin, M. S. (2004). A social cognitive theory of internet uses and gratifications: toward a new model of media attendance. J. Broadcast. Elec. Media. 48, 358–377. doi: 10.1207/s15506878jobem4803_2
LaRose, R., Lin, C. C., and Eastin, M. S. (2003). Unregulated internet usage: addiction, habit, or deficient self-regulation? Media Psychol. 5, 225–253. doi: 10.1207/S1532785XMEP0503_01
Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., and Cook, J. (2012). Misinformation and its correction: continued influence and successful debiasing. Psychol. Sci. Public Interest 13, 106–131. doi: 10.1177/1529100612451018
Nurse, M. S., Ross, R. M., Isler, O., and Van Rooy, D. (2021). Analytic thinking predicts accuracy ratings and willingness to share COVID-19 misinformation in Australia. Mem. Cogn. 50, 425–434. doi: 10.3758/s13421-021-01219-5
Przybylski, A. K., Murayama, K., DeHaan, C. R., and Gladwell, V. (2013). Motivational,
emotional, and behavioral correlates of fear of missing out. Comp. Hum. Behav. 29,
1841–1848. doi: 10.1016/j.chb.2013.02.014
Schmeichel, B. J., and Baumeister, R. F. (2004). "Self-regulatory strength" in Handbook of self-regulation: Research, theory, and applications. eds. R. F. Baumeister and K. D. Vohs (New York, NY: Guilford Press), 84–98.
Talwar, S., Dhir, A., Kaur, P., Zafar, N., and Alrasheedy, M. (2019). Why do people
share fake news? Associations between the dark side of social media use and fake news
sharing behavior. J. Retail. Consum. Serv. 51, 72–82. doi: 10.1016/j.jretconser.
2019.05.026
Thorndike, R. L. (1942). Two screening tests of verbal intelligence. J. Appl. Psychol. 26, 128–135. doi: 10.1037/h0060053
t'Serstevens, F., Piccillo, G., and Grigoriev, A. (2022). Fake news zealots: effect of perception of news on online sharing behavior. Front. Psychol. 13:859534. doi: 10.3389/fpsyg.2022.859534
Vally, Z. (2021). "Compliance with health-protective behaviors in relation to COVID-19: the roles of health-related misinformation, perceived vulnerability, and personality traits" in Mental health effects of COVID-19. ed. A. A. Moustafa (Cambridge, MA: Academic Press), 263–281. doi: 10.1016/b978-0-12-824289-6.00001-5
Vosoughi, S., Roy, D., and Aral, S. (2018). The spread of true and false news online. Science 359, 1146–1151. doi: 10.1126/science.aap9559
Wechsler, D. (1958). The measurement and appraisal of adult intelligence. 4th ed. Baltimore, MD: Williams and Wilkins. doi: 10.1037/11167-000
Westerlund, M. (2019). The emergence of deepfake technology: a review. Technol. Innov. Manag. Rev. 9, 39–52. doi: 10.22215/timreview/1282