Article
Full-text available
Presents a spreading-activation theory of human semantic processing, which can be applied to a wide range of recent experimental results. The theory is based on M. R. Quillian's (1967) theory of semantic memory search and semantic preparation, or priming. In conjunction with this, several misconceptions concerning Quillian's theory are discussed. A number of additional assumptions are proposed for his theory to apply it to recent experiments. The present paper shows how the extended theory can account for results of several production experiments by E. F. Loftus, J. F. Juola and R. C. Atkinson's (1971) multiple-category experiment, C. Conrad's (1972) sentence-verification experiments, and several categorization experiments on the effect of semantic relatedness and typicality by K. J. Holyoak and A. L. Glass (1975), L. J. Rips et al (1973), and E. Rosch (1973). The paper also provides a critique of the Rips et al model for categorization judgments. (44 ref)
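Spreading activation is often easiest to grasp computationally. The following is a minimal, hypothetical Python sketch of the general idea (the toy network, node names, edge strengths, and decay factor are invented for illustration and are not taken from Collins and Loftus): activation injected at a primed concept propagates to neighbors and weakens with semantic distance, so closely related concepts end up more active, mirroring priming effects.

```python
from collections import defaultdict

# Hypothetical toy semantic network: edges carry association strengths (0-1).
# Nodes and weights are illustrative only, not from the original theory paper.
NETWORK = {
    "doctor": {"nurse": 0.9, "hospital": 0.8, "lawyer": 0.3},
    "nurse": {"doctor": 0.9, "hospital": 0.7},
    "hospital": {"doctor": 0.8, "nurse": 0.7, "building": 0.4},
    "lawyer": {"doctor": 0.3, "court": 0.9},
    "court": {"lawyer": 0.9},
    "building": {"hospital": 0.4},
}

def spread_activation(source, steps=2, decay=0.6):
    """Propagate activation outward from a primed node.

    Activation passed along an edge is the sender's activation times the
    edge strength times a global decay factor, so activation falls off
    with semantic distance -- the core intuition behind priming.
    """
    activation = defaultdict(float)
    activation[source] = 1.0
    frontier = {source}
    for _ in range(steps):
        next_frontier = set()
        for node in frontier:
            for neighbor, strength in NETWORK.get(node, {}).items():
                gain = activation[node] * strength * decay
                if gain > activation[neighbor]:
                    activation[neighbor] = gain
                    next_frontier.add(neighbor)
        frontier = next_frontier
    return dict(activation)

if __name__ == "__main__":
    # Priming "doctor" leaves "nurse" far more active than "court",
    # mirroring faster verification of closely related concepts.
    for concept, level in sorted(spread_activation("doctor").items(),
                                 key=lambda kv: -kv[1]):
        print(f"{concept:10s} {level:.2f}")
```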
Article
Full-text available
Deepfakes have a pernicious realism advantage over other common forms of disinformation, yet little is known about how citizens perceive deepfakes. Using the third-person effects framework, this study is one of the first attempts to examine public perceptions of deepfakes. Evidence across three studies in the US and Singapore supports the third-person perception (TPP) bias, such that individuals perceived deepfakes to influence others more than themselves (Studies 1–3). The same subjects also showed a bias in perceiving themselves as better at discerning deepfakes than others (Studies 1–3). However, a deepfake detection test suggests that these third-person perceptual gaps are not predictive of the actual ability to distinguish fake from real (Study 3). Furthermore, the biases in TPP and in self-perceptions of one's own ability to identify deepfakes are more pronounced among those with high cognitive ability (Studies 2–3). The findings contribute to the third-person perception literature and to our current understanding of citizen engagement with deepfakes.
Article
Full-text available
Online media is important for society in informing and shaping opinions, hence raising the question of what drives online news consumption. Here we analyse the causal effect of negative and emotional words on news consumption using a large online dataset of viral news stories. Specifically, we conducted our analyses using a series of randomized controlled trials (N = 22,743). Our dataset comprises ~105,000 different variations of news stories from Upworthy.com that generated ~5.7 million clicks across more than 370 million overall impressions. Although positive words were slightly more prevalent than negative words, we found that negative words in news headlines increased consumption rates (and positive words decreased consumption rates). For a headline of average length, each additional negative word increased the click-through rate by 2.3%. Our results contribute to a better understanding of why users engage with online media.
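As a rough back-of-the-envelope reading of that effect size (the baseline click-through rate below is hypothetical, and treating the 2.3% per-word lift as multiplicative is an assumption made purely for illustration, not a claim from the study):

```python
baseline_ctr = 0.010     # hypothetical 1.0% click-through rate, not from the study
relative_lift = 1.023    # +2.3% relative lift per additional negative word (reported)

# Illustrative compounding of the per-word relative lift.
for extra_negative_words in range(4):
    ctr = baseline_ctr * relative_lift ** extra_negative_words
    print(f"{extra_negative_words} extra negative words -> CTR ~ {ctr:.4%}")
```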
Article
Full-text available
Deepfakes are a troubling form of disinformation that has been drawing increasing attention. Yet, there remains a lack of psychological explanations for deepfake sharing behavior and an absence of research knowledge in non-Western contexts where public knowledge of deepfakes is limited. We conduct a cross-national survey study in eight countries to examine the role of fear of missing out (FOMO), deficient self-regulation (DSR), and cognitive ability in deepfake sharing behavior. Results are drawn from a comparative survey in seven Asian contexts (China, Indonesia, Malaysia, the Philippines, Singapore, Thailand, and Vietnam) and are compared with findings from the United States, where discussions about deepfakes have been most relevant. Overall, the results suggest that those who perceive deepfakes to be accurate are more likely to share them on social media. Furthermore, in all countries, sharing is also driven by the social-psychological trait FOMO. DSR of social media use was also found to be a critical factor in explaining deepfake sharing. It is also observed that individuals with low cognitive ability are more likely to share deepfakes. However, we also find that the effects of DSR of social media use and FOMO are not contingent upon users' cognitive ability. The results of this study contribute to strategies to limit deepfake propagation on social media.
Article
Full-text available
Negativity in the news sells, but is such news also perceived as more credible and shareworthy? Given that negative information is more impactful and processed more easily, a positive-negative asymmetry might also exist in news processing. This negativity bias is explored in a two-part experiment (N = 696) where respondents rated (a) multiple positive and negative news items and (b) conflicting news on perceived credibility and shareworthiness. Results reveal no straightforward patterns: audiences hold a negativity bias in their credibility assessments only under certain conditions, and even less so when it comes to sharing news. When confronted with conflicting information, audiences do not seem to use negativity as a cue to determine which news to believe or share.
Article
Full-text available
This study investigates the antecedents of advertent (intentional) deepfake sharing behavior. Data from two countries (US and Singapore) reveal that social media news use and FOMO are positively associated with intentional deepfake sharing. Those with lower cognitive ability exhibit higher levels of FOMO and increased sharing behavior. FOMO also has a positive mediation effect on the association between citizens' news use and the sharing of deepfakes. Moderated mediation suggests that the indirect effects of social media news use on advertent sharing through FOMO are more substantial for individuals with low rather than high cognitive ability. Theoretical implications of the results are discussed.
Article
Full-text available
Hyper-realistic manipulation of audio-visual content, i.e., deepfakes, presents a new challenge for establishing the veracity of online content. Research on the human impact of deepfakes remains sparse. In a pre-registered behavioral experiment (N = 210), we show that (a) people cannot reliably detect deepfakes, and (b) neither raising awareness nor introducing financial incentives improves their detection accuracy. Zeroing in on the underlying cognitive processes, we find that (c) people are biased towards mistaking deepfakes for authentic videos (rather than vice versa) and (d) overestimate their own detection abilities. Together, these results suggest that people adopt a “seeing-is-believing” heuristic for deepfake detection while being overconfident in their (low) detection abilities. The combination renders people particularly susceptible to being influenced by deepfake content.
Article
Full-text available
This paper defends two main theses related to emerging deepfake technology. First, fears that deepfakes will bring about epistemic catastrophe are overblown. Such concerns underappreciate that the evidential power of video derives not solely from its content, but also from its source. An audience may find even the most realistic video evidence unconvincing when it is delivered by a dubious source. At the same time, an audience may find even weak video evidence compelling so long as it is delivered by a trusted source. The growing prominence of deepfake content is unlikely to change this fundamental dynamic. Thus, through appropriate patterns of trust, whatever epistemic threat deepfakes pose can be substantially mitigated. The second main thesis is that focusing on deepfakes that are intended to deceive, as epistemological discussions of the technology tend to do, threatens to overlook a further effect of this technology. Even where deepfake content is not regarded by its audience as veridical, it may cause its viewers to develop psychological associations based on that content. These associations, even without rising to the level of belief, may be harmful to the individuals depicted and more generally. Moreover, these associations may develop in cases in which the video content is realistic, but the audience is dubious of the content in virtue of skepticism toward its source. Thus, even if—as I suggest—epistemological concerns about deepfakes are overblown, deepfakes may nonetheless be psychologically impactful and may do great harm.
Article
Full-text available
The early apprehensions about how deepfakes (also deep fakes) could be weaponized for social and political purposes are now coming to pass. This study is one of the first to examine the social impact of deepfakes. Using an online survey sample in the United States, this study investigates the relationship between citizen concerns regarding deepfakes, exposure to deepfakes, inadvertent sharing of deepfakes, the cognitive ability of individuals, and social media news skepticism. Results suggest that deepfake exposure and concern are positively related to social media news skepticism. In contrast, those who frequently rely on social media as a news platform are less skeptical. Individuals with higher cognitive ability are more skeptical of news on social media. The moderation findings suggest that among those who are more concerned about deepfakes, inadvertently sharing a deepfake is associated with heightened skepticism. However, these patterns are more pronounced among individuals with low rather than high cognitive ability.
Article
Full-text available
We synthesize a burgeoning literature investigating why people believe and share false or highly misleading news online. Contrary to a common narrative whereby politics drives susceptibility to fake news, people are ‘better’ at discerning truth from falsehood (despite greater overall belief) when evaluating politically concordant news. Instead, poor truth discernment is associated with lack of careful reasoning and relevant knowledge, and the use of heuristics such as familiarity. Furthermore, there is a substantial disconnect between what people believe and what they share on social media. This dissociation is largely driven by inattention, more so than by purposeful sharing of misinformation. Thus, interventions can successfully nudge social media users to focus more on accuracy. Crowdsourced veracity ratings can also be leveraged to improve social media ranking algorithms.
Article
Full-text available
The proliferation of misinformation in social media has raised concerns on the veracity of news that citizens consume. Recent scholarship has therefore emphasized the importance of news literacy as higher levels imply greater competence in navigating the streams of information in the social media space. Drawing from subsamples of respondents who use social media for news in seven democracies (UK, Germany, Denmark, Spain, Ireland, Norway, and the US, N = 6774), this comparative analysis examines the dynamics of social media news platform use that influence news literacy. After controlling for demographics, news interest and news use frequency, analyses show that social media news engagement and connections to news organizations and journalists exhibited both positive direct and indirect relationships with news literacy. Multi-platform use of social media for news was also related to engagement, but in five countries the relationship with news literacy was negative.
Article
Full-text available
In recent years, prevalent global societal issues related to fake news, fakery, misinformation, and disinformation were brought to the fore, leading to the construction of descriptive labels such as “post-truth” to refer to the supposedly new emerging era. Thereby, the (mis-)use of technologies such as AI and VR has been argued to potentially fuel this new loss of “ground-truth”, for instance, via the ethically relevant deepfakes phenomena and the creation of realistic fake worlds, presumably undermining experiential veracity. Indeed, unethical and malicious actors could harness tools at the intersection of AI and VR (AIVR) to craft what we call immersive falsehood, fake immersive reality landscapes deliberately constructed for malicious ends. This short paper analyzes the ethically relevant nature of the background against which such malicious designs in AIVR could exacerbate the intentional proliferation of deceptions and falsities. We offer a reappraisal expounding that while immersive falsehood could manipulate and severely jeopardize the inherently affective constructions of social reality and considerably complicate falsification processes, humans may neither inhabit a post-truth nor a post-falsification age. Finally, we provide incentives for future AIVR safety work, ideally contributing to a future era of technology-augmented critical thinking.
Article
Full-text available
Deepfakes are realistic videos created using new machine learning techniques rather than traditional photographic means. They tend to depict people saying and doing things that they did not actually say or do. In the news media and the blogosphere, the worry has been raised that, as a result of deepfakes, we are heading toward an “infopocalypse” where we cannot tell what is real from what is not. Several philosophers (e.g., Deborah Johnson, Luciano Floridi, Regina Rini) have now issued similar warnings. In this paper, I offer an analysis of why deepfakes are such a serious threat to knowledge. Utilizing the account of information carrying recently developed by Brian Skyrms (2010), I argue that deepfakes reduce the amount of information that videos carry to viewers. I conclude by drawing some implications of this analysis for addressing the epistemic threat of deepfakes.
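One hedged way to make the epistemic claim concrete is a simple Bayesian sketch (this is an illustrative reconstruction, not the Skyrms-based information-carrying formalism the paper itself uses). Let $H$ be the hypothesis that the depicted event occurred and $V$ the observation of a realistic video of it. The evidential weight of the video is its likelihood ratio,

$LR = \dfrac{P(V \mid H)}{P(V \mid \neg H)}$, with posterior odds $= $ prior odds $\times LR$.

When convincing fakes were infeasible, $P(V \mid \neg H)$ was tiny, so $LR$ was large and the video strongly updated belief in $H$. As deepfakes make realistic video nearly as likely when the event did not occur, $P(V \mid \neg H) \to P(V \mid H)$, so $LR \to 1$ and the video carries little or no information about $H$, which is one way of cashing out the claim that deepfakes reduce the information videos carry to viewers.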
Article
Full-text available
Deepfakes are perceived as a powerful form of disinformation. Although many studies have focused on detecting deepfakes, few have measured their effects on political attitudes, and none have studied microtargeting techniques as an amplifier. We argue that microtargeting techniques can amplify the effects of deepfakes by enabling malicious political actors to tailor deepfakes to the susceptibilities of the receiver. In this study, we constructed a political deepfake (video and audio) and studied its effects on political attitudes in an online experiment (N = 278). We find that attitudes toward the depicted politician are significantly lower after seeing the deepfake, but the attitudes toward the politician’s party remain similar to the control condition. When we zoom in on the microtargeted group, we see that both the attitudes toward the politician and the attitudes toward his party score significantly lower than the control condition, suggesting that microtargeting techniques can indeed amplify the effects of a deepfake, but for a much smaller subgroup than expected.
Article
Full-text available
The free access to large-scale public databases, together with the fast progress of deep learning techniques, in particular Generative Adversarial Networks, have led to the generation of very realistic fake content with its corresponding implications towards society in this era of fake news. This survey provides a thorough review of techniques for manipulating face images including DeepFake methods, and methods to detect such manipulations. In particular, four types of facial manipulation are reviewed: i) entire face synthesis, ii) identity swap (DeepFakes), iii) attribute manipulation, and iv) expression swap. For each manipulation group, we provide details regarding manipulation techniques, existing public databases, and key benchmarks for technology evaluation of fake detection methods, including a summary of results from those evaluations. Among all the aspects discussed in the survey, we pay special attention to the latest generation of DeepFakes, highlighting its improvements and challenges for fake detection. In addition to the survey information, we also discuss open issues and future trends that should be considered to advance in the field.
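To make the GAN mechanism that this survey builds on concrete, here is a minimal, hypothetical PyTorch sketch of one adversarial training step on flattened images (the architecture, sizes, and hyperparameters are placeholders for illustration, not methods or settings from the survey): a generator learns to turn noise into images that a discriminator cannot distinguish from real ones, which is the basic engine behind realistic synthetic faces.

```python
import torch
import torch.nn as nn

# Minimal generator/discriminator pair for 64x64 grayscale image vectors.
# All sizes and hyperparameters are illustrative placeholders.
LATENT_DIM, IMG_DIM = 100, 64 * 64

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images):
    """One adversarial step: D learns to separate real from fake,
    G learns to produce fakes that D scores as real."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator update: real images scored as 1, generated images as 0.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()
    d_loss = criterion(discriminator(real_images), real_labels) + \
             criterion(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator score fakes as real.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = criterion(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    dummy_batch = torch.rand(8, IMG_DIM) * 2 - 1  # stand-in for real face images
    print(training_step(dummy_batch))
```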
Article
Full-text available
Fake news, and with it the problem of reliable journalism, has existed since the earliest days of journalism. However, as in many other fields, rapid technological developments over the last century have had dramatic consequences in journalism as well. The perception of trust in journalism has changed and continues to change, and our age has come to be described as an age of disinformation. Deepfake technology (AI-based video and audio manipulation) can be regarded as opening a new era within this age. Indeed, with deepfakes it has become easy even for ordinary users to make it appear that someone said something they never said or visited a place they have never been. Although this technology has the potential to provide broad benefits in various fields, it is likely to cause serious problems in many areas, including journalism. Using a descriptive analysis method, this article briefly discusses the general social problems caused by deepfakes and argues that reliable journalism is at risk of disappearing if fast and effective measures are not taken.
Article
Full-text available
Artificial Intelligence (AI) now enables the mass creation of what have become known as “deepfakes”: synthetic videos that closely resemble real videos. Integrating theories about the power of visual communication and the role played by uncertainty in undermining trust in public discourse, we explain the likely contribution of deepfakes to online disinformation. Administering novel experimental treatments to a large representative sample of the United Kingdom population allowed us to compare people’s evaluations of deepfakes. We find that people are more likely to feel uncertain than to be misled by deepfakes, but this resulting uncertainty, in turn, reduces trust in news on social media. We conclude that deepfakes may contribute toward generalized indeterminacy and cynicism, further intensifying recent challenges to online civic culture in democratic societies.
Article
Full-text available
Although manipulations of visual and auditory media are as old as media themselves, the recent emergence of deepfakes has marked a turning point in the creation of fake content. Powered by the latest technological advances in artificial intelligence and machine learning, deepfakes offer automated procedures to create fake content that is harder and harder for human observers to detect. The possibilities to deceive are endless, including manipulated pictures, videos, and audio, and organizations must be prepared as this will undoubtedly have a large societal impact. In this article, we provide a working definition of deepfakes together with an overview of the underlying technology. We classify different deepfake types and identify risks and opportunities to help organizations think about the future of deepfakes. Finally, we propose the R.E.A.L. framework to manage deepfake risks: Record original content to assure deniability, Expose deepfakes early, Advocate for legal protection, and Leverage trust to counter credulity. Following these principles, we hope that our society can be more prepared to counter deepfake tricks as we appreciate deepfake treats.
Article
Full-text available
Novel digital technologies make it increasingly difficult to distinguish between real and fake media. One of the most recent developments contributing to the problem is the emergence of deepfakes, which are hyper-realistic videos that apply artificial intelligence (AI) to depict someone saying and doing things that never happened. Coupled with the reach and speed of social media, convincing deepfakes can quickly reach millions of people and have negative impacts on our society. While scholarly research on the topic is sparse, this study analyzes 84 publicly available online news articles to examine what deepfakes are and who produces them, what the benefits and threats of deepfake technology are, what examples of deepfakes there are, and how to combat deepfakes. The results suggest that while deepfakes are a significant threat to our society, political system and business, they can be combatted via legislation and regulation, corporate policies and voluntary action, education and training, as well as the development of technology for deepfake detection, content authentication, and deepfake prevention. The study provides a comprehensive review of deepfakes and provides cybersecurity and AI entrepreneurs with business opportunities in fighting against media forgeries and fake news.
Article
Full-text available
The current study examined false memories in the week preceding the 2018 Irish abortion referendum. Participants (N = 3,140) viewed six news stories concerning campaign events—two fabricated and four authentic. Almost half of the sample reported a false memory for at least one fabricated event, with more than one third of participants reporting a specific memory of the event. “Yes” voters (those in favor of legalizing abortion) were more likely than “no” voters to “remember” a fabricated scandal regarding the campaign to vote “no,” and “no” voters were more likely than “yes” voters to “remember” a fabricated scandal regarding the campaign to vote “yes.” This difference was particularly strong for voters of low cognitive ability. A subsequent warning about possible misinformation slightly reduced rates of false memories but did not eliminate these effects. This study suggests that voters in a real-world political campaign are most susceptible to forming false memories for fake news that aligns with their beliefs, in particular if they have low cognitive ability.
Article
Full-text available
Significance: Many people consume news via social media. It is therefore desirable to reduce social media users’ exposure to low-quality news content. One possible intervention is for social media ranking algorithms to show relatively less content from sources that users deem to be untrustworthy. But are laypeople’s judgments reliable indicators of quality, or are they corrupted by either partisan bias or lack of information? Perhaps surprisingly, we find that laypeople—on average—are quite good at distinguishing between lower- and higher-quality sources. These results indicate that incorporating the trust ratings of laypeople into social media ranking algorithms may prove an effective intervention against misinformation, fake news, and news content with heavy political bias.
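The intervention the authors point to can be sketched very simply. The following is a hypothetical toy re-ranking function (post data, source names, trust scores, and the multiplicative weighting are all invented for illustration, not the paper's method): posts are ordered by engagement scaled by the average lay trust rating of their source, so highly engaging content from low-trust sources is demoted.

```python
# Hypothetical feed items: (post_id, engagement_score, source)
posts = [
    ("a", 0.90, "established-outlet.example"),
    ("b", 0.95, "hyperpartisan-site.example"),
    ("c", 0.60, "established-outlet.example"),
]

# Illustrative crowdsourced trust ratings (0-1), e.g. averaged layperson scores.
source_trust = {
    "established-outlet.example": 0.85,
    "hyperpartisan-site.example": 0.25,
}

def rank(posts, trust, default_trust=0.5):
    """Re-rank posts by engagement multiplied by the source's lay trust rating."""
    scored = [(engagement * trust.get(source, default_trust), post_id)
              for post_id, engagement, source in posts]
    return [post_id for score, post_id in sorted(scored, reverse=True)]

print(rank(posts, source_trust))  # the highly engaging low-trust post drops to last
```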
Article
Full-text available
The 2016 U.S. presidential election brought considerable attention to the phenomenon of “fake news”: entirely fabricated and often partisan content that is presented as factual. Here we demonstrate one mechanism that contributes to the believability of fake news: fluency via prior exposure. Using actual fake-news headlines presented as they were seen on Facebook, we show that even a single exposure increases subsequent perceptions of accuracy, both within the same session and after a week. Moreover, this “illusory truth effect” for fake-news headlines occurs despite a low level of overall believability and even when the stories are labeled as contested by fact checkers or are inconsistent with the reader’s political ideology. These results suggest that social media platforms help to incubate belief in blatantly false news stories and that tagging such stories as disputed is not an effective solution to this problem. It is interesting, however, that we also found that prior exposure does not impact entirely implausible statements (e.g., “The earth is a perfect square”). These observations indicate that although extreme implausibility is a boundary condition of the illusory truth effect, only a small degree of potential plausibility is sufficient for repetition to increase perceived accuracy. As a consequence, the scope and impact of repetition on beliefs is greater than has been previously assumed.
Article
Full-text available
AI, especially in the case of Deepfakes, has the capacity to undermine our confidence in the original, genuine, authentic nature of what we see and hear. And yet digital technologies, in the form of databases and other detection tools also make it easier to spot forgeries and to establish the authenticity of a work. Using the notion of ectypes, this paper discusses current conceptions of authenticity and reproduction and examines how, in the future, these might be adapted for use in the digital sphere.
Article
Full-text available
Visuals in news media help define, or frame issues, but less is known about how they influence opinions and behavior. The authors use an experiment to present image and text exemplars of frames from war and conflict news in isolation and in image-text congruent and incongruent pairs. Results show that, when presented alone, images generate stronger framing effects on opinions and behavioral intentions than text. When images and text are presented together, as in a typical news report, the frame carried by the text influences opinions regardless of the accompanying image, whereas the frame carried by the image drives behavioral intentions irrespective of the linked text. These effects are explained by the salience enhancing and emotional consequences of visuals.
Article
Full-text available
We review recent evidence revealing that the mere willingness to engage analytic reasoning as a means to override intuitive “gut feelings” is a meaningful predictor of key psychological outcomes in diverse areas of everyday life. For example, those with a more analytic thinking style are more skeptical about religious, paranormal, and conspiratorial concepts. In addition, analytic thinking relates to having less traditional moral values, making less emotional or disgust-based moral judgments, and being less cooperative and more rationally self-interested in social dilemmas. Analytic thinkers are even less likely to offload thinking to smartphone technology and may be more creative. Taken together, these results indicate that the propensity to think analytically has major consequences for individual psychology.
Article
Purpose: Deepfake information poses more ethical risks than traditional disinformation in terms of fraud, slander, rumors and other malicious uses. However, owing to its high entertainment value, deepfake information with ethical risks has become popular. This study aims to understand the role of ethics and entertainment in the acceptance and regulation of deepfake information.
Design/methodology/approach: Mixed methods were used to qualitatively identify ethical concerns and quantitatively evaluate the influence of ethical concerns and perceived enjoyment on the ethical acceptability and social acceptance of deepfake information.
Findings: The authors confirmed that informed consent, privacy protection, traceability and non-deception had a significantly positive impact on ethical acceptability and indirectly influenced social acceptance, with privacy protection being the most sensitive. Perceived enjoyment impacts the social acceptance of deepfake information and significantly weakens the effect of ethical acceptability on social acceptance.
Originality/value: The ethical concerns affecting acceptance behavior identified in this study provide an entry point for the ethical regulation of deepfake information. The weakening effect of perceived enjoyment on ethics serves as a wake-up call for regulators to guard against pan-entertainment deepfake information.
Article
The current research investigated (a) if political identity predicts perceived truthfulness of and the intention to share partisan news, and (b) if a media literacy video that warns of misinformation (priming-video) mitigates the partisan bias by enhancing truth discernment. To evaluate if heightened salience of misinformation accounts for the effects of the media literacy intervention, we also tested if recalling prior exposure to misinformation (priming-question) would yield the same results as watching the literacy video does. Two web-based experiments were conducted in South Korea. In Study 1 (N = 384), both liberals and conservatives found politically congenial information more truthful and shareworthy. Although misinformation priming lowered perceived truthfulness and sharing intention of partisan news, such effects were greater for false, rather than true information, thereby improving truth discernment. Study 2 (N = 600) replicated Study 1 findings, except that the misinformation priming lowered perceived truthfulness and the sharing intention across the board, regardless of the veracity of information. Collectively, our findings demonstrate the robust operation of partisan bias in the processing and sharing of partisan news. Misinformation priming aided in the detection of falsehood, but it also induced distrust in reliable information, posing a challenge in fighting misinformation.
Article
Today, digitalization is affecting all areas of life, such as education or work. The competent use of digital systems (esp. information and communication technologies [ICT]) has thus become an essential skill. Despite longstanding research on human-technology interaction and diverse theoretical approaches describing competences for interacting with digital systems, research still offers mixed results regarding the structure of digital competences. Self-efficacy is described as one of the most critical determinants of competent digital system use, and various self-report scales for assessing digital self-efficacy have been suggested. Yet, these scales largely differ in their proposed specificity, structure, validation, and timeliness. The present study aims at providing a systematic overview and comparison of existing measures of digital self-efficacy (DSE) to current theoretical digital competence frameworks. Further, we present a newly developed scale that assesses digital self-efficacy in heterogeneous adult populations, theoretically founded in the DigComp 2.1 and social-cognition theory. The factorial structure of the DSE scale is assessed to investigate multidimensionality. Further, the scale is validated considering the nomological network (actual ICT use, technophobia). Implications for research and practice are discussed.
Article
With rapid technical advancements, deepfakes (i.e., hyper-realistic fake videos using face swaps) have become more widespread and easier to create, challenging the old notion of "seeing is believing." Despite raised concerns over potential impacts of deepfakes on people's credibility toward audiovisual evidence in journalism, systematic investigation of the topic has been lacking. This study conducted an experiment (N = 230) that tested (1) how a news article using deepfake video (vs. real video) affects news credibility and viral behavioral intentions and (2) whether, based on signaling theory, obtaining knowledge about the low cost of producing deepfakes reduces the impact of deepfake news. Results show that people whose pre-existing attitudes toward controversial issues (abortion, marijuana legalization) are congruent with the advocated position of a news article are more likely to believe and be willing to share deepfake news as much as real video news. In addition, educating participants about the low cost of producing deepfakes was effective in reducing the credibility and viral behavioral intention of deepfake news for those who have congruent issue attitudes. This study provides evidence for differing levels of susceptibility to deepfake news and the importance of media literacy education regarding deepfakes that would prevent biased reasoning.
Article
Misinformation has been identified as a major contributor to various contentious contemporary events ranging from elections and referenda to the response to the COVID-19 pandemic. Not only can belief in misinformation lead to poor judgements and decision-making, it also exerts a lingering influence on people’s reasoning after it has been corrected — an effect known as the continued influence effect. In this Review, we describe the cognitive, social and affective factors that lead people to form or endorse misinformed views, and the psychological barriers to knowledge revision after misinformation has been corrected, including theories of continued influence. We discuss the effectiveness of both pre-emptive (‘prebunking’) and reactive (‘debunking’) interventions to reduce the effects of misinformation, as well as implications for information consumers and practitioners in various areas including journalism, public health, policymaking and education.
Article
This study investigated the influence of computer self-efficacy, computer anxiety, and cognitive skills on the use of electronic library resources by social science undergraduates in a tertiary institution in Nigeria. A survey research design was adopted, and a stratified random sampling technique was used to select a sample of 869 from a population of 1,452 social science undergraduates across five departments. A total of 793 questionnaires were properly completed and collated, equal to a response rate of 91.3% of the sample. Findings from the study revealed that there were significant relationships among computer self-efficacy, computer anxiety, cognitive skills, and use of electronic library resources by the respondents. Computer self-efficacy, computer anxiety, and cognitive skills individually and jointly had a significant influence on the respondents' use of electronic library resources. Therefore, library management in the tertiary institution should give due consideration to the computer self-efficacy, computer anxiety, and cognitive skills of the respondents when planning to enhance their use of electronic library resources, among other measures.
Article
We examine how individual differences influence perceived accuracy of deepfake claims and sharing intention. Rather than political deepfakes, we use a non-political deepfake of a social media influencer as the stimulus, with an educational and a deceptive condition. We find that individuals are more likely to perceive the deepfake claim to be true when informative cues are missing along with the deepfake (compared to when they are present). Also, individuals are more likely to share deepfakes when they consider the fabricated claim to be accurate. Moreover, we find that cognitive ability plays a moderating role such that when informative cues are present (educational condition), individuals with high cognitive ability are less trustful of deepfake claims. Unexpectedly, when the informative cues are missing (deceptive condition), these individuals are more likely to consider the claim to be true and to share it. The findings suggest that adding corrective labels can help reduce inadvertent sharing of disinformation. Also, user biases should be considered in understanding public engagement with disinformation.
Article
Despite a great deal of research, much about the effects of political comedy programming on its viewers remains uncertain. One promising line of work has focused on increased internal political efficacy—the sense that one is competent to engage with politics—as an outcome of exposure to political comedy programs. This may explain results showing that viewers are more likely to participate in politics. We extend this approach by considering the role of political comedy’s “gateway” effect in encouraging political media consumption, which can promote additional increases in efficacy and participation. This study provides a theoretical synthesis of prior research and a rigorous empirical test using a representative panel survey of adults in the United States, providing evidence of a relationship between political comedy and participation with both news use and internal efficacy serving as mediators. Furthermore, we find that only political satire, not late-night talk shows, appears to produce these effects.
Chapter
Deep Learning (DL) has been widely adopted in the domain of cybersecurity to address a variety of security and privacy concerns. Moreover, in recent years attackers have increasingly adopted deep learning to develop new, sophisticated DL-based security attacks, such as deepfakes. Recently, deepfake technology has been used to spread misinformation on social networks. Currently, the most popular class of algorithms for deepfake image generation is Generative Adversarial Networks (GANs). The goal of this paper is to adopt DL-based smart detection techniques to defend against smart misinformation. We develop a set of hands-on labs and integrate them into our cybersecurity curriculum so that our students, future cybersecurity professionals, can be educated to use detection software and identify deepfakes. Finally, we will investigate the fundamental capabilities, challenges, and limitations of deep learning for detecting smart attacks.
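A hands-on lab along the lines the chapter describes typically centers on a supervised binary image classifier. The following is a minimal, hypothetical PyTorch sketch of such a real-vs-fake detector (the architecture, input size, and data loading are placeholders and not the chapter's actual lab code): a small CNN maps an image to a single logit, trained on images labeled real or GAN-generated.

```python
import torch
import torch.nn as nn

# Tiny CNN that maps a 3x64x64 image to a single real/fake logit.
# Architecture and sizes are illustrative placeholders only.
detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # logit > 0 predicts "fake"
)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)

def train_step(images, labels):
    """One supervised step on a batch labeled 0 (real) / 1 (fake)."""
    logits = detector(images).squeeze(1)
    loss = criterion(logits, labels.float())
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Random stand-ins for a labeled face-image batch (e.g. real vs. GAN-generated).
    images = torch.rand(8, 3, 64, 64)
    labels = torch.randint(0, 2, (8,))
    print(train_step(images, labels))
```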
Article
Fake news, deliberately inaccurate and often biased information that is presented as accurate reporting, is perceived as a serious threat. Recent research on fake news has documented a high general susceptibility to the phenomenon and has focused on investigating potential explanatory factors. The present study examined how features of news headlines affected their perceived accuracy. Across four experiments (total N = 659), we examined the effects of pictures, perceptual clarity, and repeated exposure on the perceived accuracy of news headlines. In all experiments, participants received a set of true and false news headlines and rated their accuracy. The presence of pictures and repeated exposure increased perceived accuracy, whereas manipulations of perceptual clarity did not show the predicted effects. The effects of pictures and repeated exposure were similar for true and false headlines. These results demonstrate that accompanying pictures and repeated exposure can affect evaluations of truth of news headlines.
Article
Deep fakes have rapidly emerged as one of the most ominous concerns within modern society. The ability to easily and cheaply generate convincing images, audio, and video via artificial intelligence will have repercussions within politics, privacy, law, security, and broadly across all of society. In light of the widespread apprehension, numerous technological efforts aim to develop tools to distinguish between reliable audio/video and the fakes. These tools and strategies will be particularly effective for consumers when their guard is naturally up, for example during election cycles. However, recent research suggests that not only can deep fakes create credible representations of reality, but they can also be employed to create false memories. Memory malleability research has been around for some time, but it relied on doctored photographs or text to generate fraudulent recollections. These recollected but fake memories take advantage of our cognitive miserliness, which favors selecting those recalled memories that evoke our preferred weltanschauung. Even responsible consumers can be duped when false but belief-consistent memories, implanted when we are least vigilant, can, like a Trojan horse, be later elicited at crucial dates to confirm our pre-determined biases and influence us to accomplish nefarious goals. This paper seeks to understand the process of how such memories are created and, based on that, proposes ethical and legal guidelines for the legitimate use of fake technologies.
Article
Amid growing concerns about misinformation on social media, scholars, educators, and commentators see news literacy as a means to improve critical media consumption. We use a nationally-representative sample to investigate the relationship between news literacy (NL), seeing and posting news and political content on social media, and skepticism toward information shared on social media. This study finds NL and related orientations contribute to who is seeing and sharing information on social media, with those who are more knowledgeable about media structures seeing and sharing less content. Moreover, those who are more news literate and value NL are more skeptical of information quality on social media. Seeing and posting news and political content on social media are not associated with skepticism. This study suggests that NL plays an important role in shaping perceptions of information shared online.
Article
People are more inclined to believe that information is true if they have encountered it before. Little is known about whether this illusory truth effect is influenced by individual differences in cognition. In seven studies (combined N = 2,196), using both trivia statements (Studies 1-6) and partisan news headlines (Study 7), we investigate moderation by three factors that have been shown to play a critical role in epistemic processes: cognitive ability (Studies 1, 2, 5), need for cognitive closure (Study 1), and cognitive style, that is, reliance on intuitive versus analytic thinking (Studies 1, 3-7). All studies showed a significant illusory truth effect, but there was no evidence for moderation by any of the cognitive measures across studies. These results indicate that the illusory truth effect is robust to individual differences in cognitive ability, need for cognitive closure, and cognitive style.
Article
Fake news has become a prominent topic of public discussion, particularly amongst elites. Recent research has explored the prevalence of fake news during the 2016 election cycle and possible effects on electoral outcomes. This scholarship has not yet considered how elite discourse surrounding fake news may influence individual perceptions of real news. Through an experiment, this study explores the effects of elite discourse about fake news on the public’s evaluation of news media. Results show that exposure to elite discourse about fake news leads to lower levels of trust in media and less accurate identification of real news. Therefore, frequent discussion of fake news may affect whether individuals trust news media and the standards with which they evaluate it. This discourse may also prompt the dissemination of false information, particularly when fake news is discussed by elites without context and caution.
Article
This article examines online recruitment via Facebook, Mechanical Turk (MTurk), and Qualtrics panels in India and the United States. It compares over 7300 respondents—1000 or more from each source and country—to nationally representative benchmarks in terms of demographics, political attitudes and knowledge, cooperation, and experimental replication. In the United States, MTurk offers the cheapest and fastest recruitment, Qualtrics is most demographically and politically representative, and Facebook facilitates targeted sampling. The India samples look much less like the population, though Facebook offers broad geographical coverage. We find online convenience samples often provide valid inferences into how partisanship moderates treatment effects. Yet they are typically unrepresentative on such political variables, which has implications for the external validity of sample average treatment effects.
Article
Why do people believe blatantly inaccurate news headlines ("fake news")? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news - even for headlines that align with individuals' political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant's ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one's political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se - a finding that opens potential avenues for fighting fake news.
Article
The present experiment (N = 390) examined how people adjust their judgment after they learn that crucial information on which their initial evaluation was based is incorrect. In line with our expectations, the results showed that people generally do adjust their attitudes, but the degree to which they correct their assessment depends on their cognitive ability. In particular, individuals with lower levels of cognitive ability adjusted their attitudes to a lesser extent than individuals with higher levels of cognitive ability. Moreover, for those with lower levels of cognitive ability, even after the explicit disconfirmation of the false information, adjusted attitudes remained biased and significantly different from the attitudes of the control group who was never exposed to the incorrect information. In contrast, the adjusted attitudes of those with higher levels of cognitive ability were similar to those of the control group. Controlling for need for closure and right-wing authoritarianism did not influence the relationship between cognitive ability and attitude adjustment. The present results indicate that, even in optimal circumstances, the initial influence of incorrect information cannot simply be undone by pointing out that this information was incorrect, especially in people with relatively lower cognitive ability.
Article
Departing from the conventional approach that emphasizes civic and political motives for political engagement, this study investigates how political social media behaviors—political expression—might emerge out of everyday, non-political use of the sites from an interpersonal communication perspective. Using two separate adult samples of Facebook (n = 727) and Twitter users (n = 663), this study examines how non-political, passive (NPP, consuming non-political content) and non-political, active (NPA, producing non-political content) social media use relate to expression of political voice on the sites. Findings show that only NPA use is positively associated with increased political expression, and this relationship is partially explained by political efficacy. The patterns of findings are consistent across Facebook and Twitter.