Conference Paper

Are Deep Learning-Generated Social Media Profiles Indistinguishable from Real Profiles?

... With the emergence of generative AI, we face the possibility of generating textual or visual data based on pre-existing datasets, expanding the use of synthetic data from purely numerical research to a vast range of fields and applications. Recent examples of AI-based synthetic data in research include synthetic survey responses (Jansen et al., 2023) and synthetic social media posts (Rossi et al., 2023), where Large Language Models (LLMs) or Generative Adversarial Networks (GANs) are used to either augment or recreate real datasets for research purposes. ...
... In such cases, the potential for analytical and critical transparency is very high, as the assessment process has a clear reference. Emergent synthetic data refers to cases where, similarly to complex systems, there is no real dataset or sufficient knowledge about the real phenomenon against which to evaluate the data directly, e.g., the generation of fake personas (Rossi et al., 2023). In these cases, only statistical inferences or subjective evaluations can be made, reducing the level of critical transparency that can be achieved for that type of data. ...
... Example: Fake persona social network profile (Rossi et al., 2023). ... control theory and theorized on how they relate to the generation of synthetic data. ...
Conference Paper
Full-text available
Generative AI is paving its way into the research process. Among the plethora of available generative AI solutions, the generation of synthetic data is one of the most controversial. The current division of opinion and the lack of a formal approach to AI use in research create a situation of conflicting bad practices and under-used potential. This work aims to add nuance and structure to this research practice by providing a general framework for evaluating the use of synthetic data in different stages of the research process, based on the objective and methods of generation. Relying on a breakout literature review, we explore the fields of data quality management and control theory to transfer method theories from these fields and help us build the framework. The resulting conceptual framework provides an iterative scheme where, based on the desired properties of the data and their comparison to the synthetic result, the researcher can improve the outcome of the generation process and, equivalently, formally present the properties that make the data suitable for research.
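The iterative scheme described above can be illustrated with a minimal sketch: generate candidate synthetic data, compare them against the desired statistical properties of a real reference sample, and adjust the generator until the gap falls below a tolerance. Everything below (the Gaussian generator, the target statistics, the correction step) is a hypothetical stand-in, not the framework's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "real" reference data and the properties we care about.
real = rng.normal(loc=3.0, scale=1.5, size=500)
target_mean, target_std = real.mean(), real.std()

def generate_synthetic(mean, std, n=500):
    """Stand-in generator; in practice this could be a GAN or an LLM-based sampler."""
    return rng.normal(loc=mean, scale=std, size=n)

# Iterate: generate, compare against the desired properties, adjust, repeat.
mean_hat, std_hat, tolerance = 0.0, 1.0, 0.05
for step in range(100):
    synthetic = generate_synthetic(mean_hat, std_hat)
    mean_gap = target_mean - synthetic.mean()
    std_gap = target_std - synthetic.std()
    if abs(mean_gap) < tolerance and abs(std_gap) < tolerance:
        print(f"accepted after {step} iterations")
        break
    # Simple proportional correction of the generator's parameters.
    mean_hat += 0.5 * mean_gap
    std_hat += 0.5 * std_gap
```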
... We have noticed some peculiarities in the approach to compiling the dataset of natural and AI-generated faces in this survey: the AI-created faces looked more smiley and generally more friendly than the natural ones did, which created a potential for bias in the judgements. Still, several other studies provided further support for the claims made by Nightingale and Farid (Bray, Johnson, and Kleinberg 2023; Lago et al. 2022; Rossi et al. 2022; Tucciarelli et al. 2022). ...
... In this study, we wanted to investigate whether the mean morphometric features of AI-generated faces, including symmetry and shape variance (here measured as morphological disparity), are the same as those of natural faces. Unlike several previous studies (Bray, Johnson, and Kleinberg 2023; Lago et al. 2022; Nightingale and Farid 2022; Rossi et al. 2022; Tucciarelli et al. 2022), we used standardized synthesized faces with a neutral expression and compared them to natural faces selected from our database of standardized facial portraits. Recent studies have shown that humans are no longer able to distinguish artificially generated facial stimuli from portrait photographs of real human beings. ...
... Moreover, AI-synthesized faces are less variable in facial shape, i.e., show lower morphological disparity, and have lower levels of facial asymmetry than natural faces do. From the perspective of objectively quantifiable morphometric measurements, artificial and natural faces are still distinguishable, although people cannot see these differences (Bray et al. 2023; Lago et al. 2022; Nightingale and Farid 2022; Rossi et al. 2022; Tucciarelli et al. 2022). ...
Article
Full-text available
Some recent studies suggest that artificial intelligence can create realistic human faces that are subjectively unrecognizable from faces of real people. We compared static facial photographs of 197 real men with a sample of 200 male faces generated by artificial intelligence to test whether they converge in basic morphological characteristics such as shape variation and bilateral asymmetry. Both datasets depicted standardized faces of European men with a neutral expression. We then used geometric morphometrics to investigate their facial morphology and calculate measures of shape variation and asymmetry. We found that the natural faces of real individuals were more variable in their facial shape than the artificially generated faces were. Moreover, the artificially synthesized faces showed lower levels of facial asymmetry than the control group. Despite the rapid development of generative adversarial networks, natural faces are thus still statistically distinguishable from artificial ones by objective measurements. We recommend that researchers in face perception who aim to use artificially generated faces as ecologically valid stimuli check whether the morphological variance of their stimuli is comparable with that of natural faces in a target population.
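The two quantities at the core of this comparison, shape variance (morphological disparity) and asymmetry, can be approximated from 2D facial landmarks. The sketch below is a simplified illustration using invented landmark arrays and landmark pairings; the study itself relies on a full geometric-morphometric pipeline (Procrustes superimposition and related methods).

```python
import numpy as np

def normalize(config):
    """Center a landmark configuration and scale it to unit centroid size."""
    centered = config - config.mean(axis=0)
    return centered / np.linalg.norm(centered)

def disparity(sample):
    """Shape-variance proxy for morphological disparity: mean squared distance
    of each normalized configuration from the sample's mean shape."""
    shapes = np.array([normalize(c) for c in sample])
    mean_shape = shapes.mean(axis=0)
    return float(np.mean([np.sum((s - mean_shape) ** 2) for s in shapes]))

def asymmetry(config, pairs):
    """Asymmetry proxy: squared distance between a configuration and its mirror
    image, with paired left/right landmarks swapped after reflecting the x-axis."""
    mirrored = config.copy()
    mirrored[:, 0] *= -1.0
    for left, right in pairs:
        mirrored[[left, right]] = mirrored[[right, left]]
    return float(np.sum((normalize(config) - normalize(mirrored)) ** 2))

# Hypothetical landmark data: 200 faces per group, 70 landmarks, 2D coordinates.
rng = np.random.default_rng(1)
mean_face = rng.normal(size=(70, 2))
real_faces = mean_face + 0.30 * rng.normal(size=(200, 70, 2))
ai_faces = mean_face + 0.15 * rng.normal(size=(200, 70, 2))   # deliberately less variable

print("disparity, real faces:", disparity(real_faces))
print("disparity, AI faces:  ", disparity(ai_faces))
print("asymmetry, one face:  ", asymmetry(real_faces[0], pairs=[(0, 1), (2, 3)]))
```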
... The surge of Generative AI is both a testament to the vibrancy of the AI industry and a potential threat to social security. AI-generated images are becoming more prevalent on social media, yet human eyes can hardly spot images generated by the most advanced generation models [9]. Image generation contributes significantly to the spread of fake news, which may manipulate public opinion and disturb political stability. ...
Conference Paper
Full-text available
The significant improvement in AI image generation in recent years poses serious threats to social security, as AI-generated misinformation may infringe upon political stability, personal privacy, and the copyrights of artists. Building an AI-generated-image detector that accurately identifies generated images is crucial to maintaining social security and the property rights of artists. This paper introduces a preprocessing pipeline that uses positionally encoded azimuthal integrals of image patches to create fingerprints that encapsulate distinguishing features. We then trained a multi-head attention model that reaches 97.5% accuracy in classifying the fingerprints. The model also achieved 80% accuracy on images generated by AI models not present in the training dataset, demonstrating the robustness of our pipeline and the potential for broader application of our model.
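The fingerprinting step resembles frequency-domain analyses commonly used for detecting GAN-generated images: compute the 2D Fourier power spectrum of a patch and reduce it to a 1D profile by averaging over concentric rings (an azimuthal integral). The sketch below shows only that reduction with numpy; the positional encoding and the multi-head attention classifier from the paper are not reproduced, and all sizes are illustrative.

```python
import numpy as np

def azimuthal_profile(patch, n_bins=64):
    """Reduce a grayscale patch to a 1D radial profile of its log power spectrum
    by averaging over concentric rings around the spectrum's center."""
    spectrum = np.fft.fftshift(np.fft.fft2(patch))
    power = np.log1p(np.abs(spectrum) ** 2)

    h, w = patch.shape
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - h / 2, xx - w / 2)
    max_radius = radius.max()

    profile = np.zeros(n_bins)
    for i in range(n_bins):
        lo, hi = i * max_radius / n_bins, (i + 1) * max_radius / n_bins
        ring = (radius >= lo) & (radius < hi)
        profile[i] = power[ring].mean() if ring.any() else 0.0
    return profile

# Hypothetical usage: one fingerprint per 64x64 patch, later fed to a classifier.
rng = np.random.default_rng(2)
patches = rng.random((10, 64, 64))
fingerprints = np.stack([azimuthal_profile(p) for p in patches])
print(fingerprints.shape)  # (10, 64)
```

Profiles of this kind tend to expose the periodic upsampling artifacts of generative models in the high-frequency bins, which is the sort of signal a downstream classifier can pick up.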
... With the rapid advancement of AI algorithms, content generated by AI, such as social media feeds, can be indistinguishable from human-generated content (Rossi et al., 2023). The availability of several AI applications, such as ChatGPT, Bard, Microsoft Copilot, and DALL-E, has sparked considerable interest in and adoption of AI. ...
Conference Paper
Full-text available
Artificial intelligence (AI) has seen rapid development in recent years and has increasingly been applied to various fields. Research is no exception. However, there is much to be explored in this domain. This study aims to explore the suitability of current generative AI applications for research purposes. The focus is on generative AI's capability to synthesise information as a potential alternative or supplement to human information synthesis. To evaluate the effectiveness of the thematic analysis produced by generative AI, this study compares results produced by ChatGPT with human-generated results based on the same set of papers. The results show that generative AI produced results very similar to those of humans, both in the topics themselves and in the number of topics identified. However, there are also some minor mismatches between the generative AI and human results.
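The topic-level agreement reported here can be quantified in a very simple way, for instance with a Jaccard overlap between the set of themes named by human coders and the set named by the model. The labels below are invented for illustration; in practice, themes that are phrased differently but mean the same thing would also need semantic matching.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of normalized theme labels."""
    a, b = {t.strip().lower() for t in a}, {t.strip().lower() for t in b}
    return len(a & b) / len(a | b)

# Invented theme lists standing in for human coding and ChatGPT output.
human_themes = ["adoption barriers", "trust in AI", "data privacy", "research ethics"]
ai_themes = ["Trust in AI", "Data privacy", "adoption barriers", "productivity gains"]

print("number of themes (human vs AI):", len(human_themes), "vs", len(ai_themes))
print("label overlap (Jaccard):", jaccard(human_themes, ai_themes))  # 0.6
```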
... A Pew study in 2018 estimated that bot accounts share a large portion (roughly 66%) of the links posted on Twitter (Wojcik et al. 2018). Future research should consider the role of bot accounts and the potential for AI-generated content when analyzing social media data, particularly as detecting AI-generated content and accounts becomes increasingly difficult (Rossi et al. 2022). Additionally, we focus on a relatively short window of time in sampling the Twitter data. ...
Article
Full-text available
The COVID‐19 pandemic has catalyzed debates about how the public and leaders respond to health threats and the role that the media and emotions play in these responses. Predating COVID‐19, the 2014 Ebola outbreak can serve as a case to examine the constructions and pervasiveness of fear discourse and other emotions in news and social media. In this mixed‐method study, we examine fear discourse in web‐based and traditional newspaper headlines and emergent emotions in social media data (Twitter) during the peak of Ebola coverage. Users discuss fear on Twitter in a variety of ways, and there was an increase in tweets following the first Ebola case in the United States. However, it is humor, not fear, that is the most dominant theme in Twitter responses. Claims by health leaders and media scholars that information technology and social media spread fear receive limited support. The prevalence of different emotions varies across formats (headlines and social media) and has important implications for understanding the myths and realities of public responses to health threats.
Article
The explosion of online social networks in recent decades has significantly changed the way individuals communicate with one another. People trust social networks blindly, without knowing the origin or genuineness of the information passed through them. Unreliable information on online social networks sometimes misleads viewers and leaves lasting damage. Online social networks can even distort official government information, which creates confusion among people and erodes their confidence in the government. Various types of research have been conducted to identify fake news with high efficiency. In this survey, we describe the basic theories of fake news, investigate and analyze perspectives on fake news and the attribution of misleading information, provide an in-depth analysis of disinformation, and review the methods that have been established for its detection. To our knowledge, this research article will assist in facilitating collaboration among technical experts, political campaigns, online commerce, and other domains involved in investigating fake messages.
Article
Full-text available
Artificial intelligence (AI)–synthesized text, audio, image, and video are being weaponized for the purposes of nonconsensual intimate imagery, financial fraud, and disinformation campaigns. Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable from, and more trustworthy than, real faces.
Conference Paper
Full-text available
Text generation has become one of the most important yet challenging tasks in natural language processing (NLP). The resurgence of deep learning has greatly advanced this field through neural generation models, especially the paradigm of pretrained language models (PLMs). In this paper, we present an overview of the major advances achieved in the topic of PLMs for text generation. As preliminaries, we present the general task definition and briefly describe the mainstream architectures of PLMs for text generation. As the core content, we discuss how to adapt existing PLMs to model different input data and satisfy special properties in the generated text. We further summarize several important fine-tuning strategies for text generation. Finally, we present several future directions and conclude the paper. Our survey aims to provide text generation researchers with a synthesis of, and pointers to, related research.
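As a minimal, concrete instance of the paradigm the survey covers, the snippet below generates text from a prompt with a small pretrained language model. It assumes the Hugging Face transformers library and the publicly available gpt2 checkpoint, which are common choices but not something this particular survey prescribes.

```python
# pip install transformers torch
from transformers import pipeline

# gpt2 is just a small, publicly available PLM; any causal LM checkpoint would do.
generator = pipeline("text-generation", model="gpt2")

outputs = generator(
    "Social media profiles generated by deep learning",
    max_new_tokens=40,        # length of the continuation
    num_return_sequences=2,   # sample two alternative continuations
    do_sample=True,
    top_p=0.9,                # nucleus sampling
)
for out in outputs:
    print(out["generated_text"])
    print("---")
```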
Conference Paper
Full-text available
The omnipresent COVID-19 pandemic gave rise to a parallel spreading of misinformation, also referred to as an ‘Infodemic’. Consequently, social media have become targets for the application of social bots, that is, algorithms that mimic human behaviour. Their ability to exert influence on social media can be exploited by amplifying misinformation, rumours, or conspiracy theories, which might be harmful to society and to efforts to contain the pandemic. By applying social bot detection and content analysis techniques, this study aims to determine the extent to which social bots interfere with COVID-19 discussions on Twitter. A total of 78 presumptive bots were detected within a sample of 542,345 users. The analysis revealed that bot-like users who disseminate misinformation also intersperse it with news from renowned sources. The findings of this research provide implications for improved bot detection and for managing potential threats posed by social bots during ongoing and future crises.
Article
Full-text available
On the morning of November 9th, 2016, the world woke up to the shocking outcome of the US Presidential elections: Donald Trump was the 45th President of the United States of America. An unexpected event that still has tremendous consequences all over the world. Today, we know that a minority of social bots – automated social media accounts mimicking humans – played a central role in spreading divisive messages and disinformation, possibly contributing to Trump's victory [16, 19]. In the aftermath of the 2016 US elections, the world started to realize the gravity of widespread deception in social media. Following Trump's success, we witnessed the emergence of a strident dissonance between the multitude of efforts for detecting and removing bots and the increasing effects that these malicious actors seem to have on our societies [27, 29]. This paradox opens a burning question: What strategies should we enforce in order to stop this social bot pandemic? In these times – during the run-up to the 2020 US elections – the question appears more crucial than ever. Particularly so in light of the recently reported tampering with the electoral debate by thousands of AI-powered accounts. What struck social, political and economic analysts after 2016 – deception and automation – has, however, been a matter of study for computer scientists since at least 2010. In this work, we briefly survey the first decade of research in social bot detection. Via a longitudinal analysis, we discuss the main trends of research in the fight against bots, the major results that were achieved, and the factors that make this never-ending battle so challenging. Capitalizing on lessons learned from our extensive analysis, we suggest possible innovations that could give us the upper hand against deception and manipulation. Studying a decade of endeavors at social bot detection can also inform strategies for detecting and mitigating the effects of other – more recent – forms of online deception, such as strategic information operations and political trolls.
Article
Full-text available
The release of openly available, robust natural language generation algorithms (NLG) has spurred much public attention and debate. One reason lies in the algorithms' purported ability to generate humanlike text across various domains. Empirical evidence using incentivized tasks to assess whether people (a) can distinguish and (b) prefer algorithm-generated versus human-written text is lacking. We conducted two experiments assessing behavioral reactions to the state-of-the-art Natural Language Generation algorithm GPT-2 (Ntotal = 830). Using the identical starting lines of human poems, GPT-2 produced samples of poems. From these samples, either a random poem was chosen (Human-out-of-the-loop) or the best one was selected (Human-in-the-loop) and in turn matched with a human-written poem. In a new incentivized version of the Turing Test, participants failed to reliably detect the algorithmically generated poems in the Human-in-the-loop treatment, yet succeeded in the Human-out-of-the-loop treatment. Further, people reveal a slight aversion to algorithm-generated poetry, independent of whether participants were informed about the algorithmic origin of the poem (Transparency) or not (Opacity). We discuss what these results convey about the performance of NLG algorithms in producing human-like text and propose methodologies to study such learning algorithms in human-agent experimental settings. Artificial intelligence (AI), "the development of machines capable of sophisticated (intelligent) information processing" (Dafoe, 2018, p. 5), is rapidly advancing and has begun to take over tasks previously performed solely by humans (Rahwan et al., 2019). Algorithms are already assisting humans in writing text, such as autocompleting sentences in emails and even helping writers write novels (Streitfeld, 2018, pp.
Conference Paper
Full-text available
Propaganda campaigns aim at influencing people's mindset with the purpose of advancing a specific agenda. They exploit the anonymity of the Internet, the micro-profiling ability of social networks, and the ease of automatically creating and managing coordinated networks of accounts, to reach millions of social network users with persuasive messages, specifically targeted to topics each individual user is sensitive to, and ultimately influencing the outcome on a targeted issue. In this survey, we review the state of the art on computational propaganda detection from the perspective of Natural Language Processing and Network Analysis, arguing about the need for combined efforts between these communities. We further discuss current challenges and future research directions.
Article
Full-text available
The spread of online misinformation poses serious challenges to societies worldwide. In a novel attempt to address this issue, we designed a psychological intervention in the form of an online browser game. In the game, players take on the role of a fake news producer and learn to master six documented techniques commonly used in the production of misinformation: polarisation, invoking emotions, spreading conspiracy theories, trolling people online, deflecting blame, and impersonating fake accounts. The game draws on an inoculation metaphor, where preemptively exposing, warning, and familiarising people with the strategies used in the production of fake news helps confer cognitive immunity when exposed to real misinformation. We conducted a large-scale evaluation of the game with N = 15,000 participants in a pre-post gameplay design. We provide initial evidence that people's ability to spot and resist misinformation improves after gameplay, irrespective of education, age, political ideology, and cognitive style.
Article
Full-text available
Information systems such as social media strongly influence public opinion formation. Additionally, communication on the internet is shaped by individuals and organisations with various aims. This environment has given rise to phenomena such as manipulated content, fake news, and social bots. To examine the influence of manipulated opinions, we draw on the spiral of silence theory and complex adaptive systems. We translate empirical evidence of individual behaviour into an agent-based model and show that the model results in the emergence of a consensus on the collective level. In contrast to most previous approaches, this model explicitly represents interactions as a network. The most central actor in the network determines the final consensus 60–70% of the time. We then use the model to examine the influence of manipulative actors such as social bots on public opinion formation. The results indicate that, in a highly polarised setting, depending on their network position and the overall network density, bot participation by as little as 2–4% of a communication network can be sufficient to tip over the opinion climate in two out of three cases. These findings demonstrate a mechanism by which bots could shape the norms adopted by social media users.
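The mechanism can be sketched with a toy agent-based simulation: agents on a random network voice their opinion only when they perceive enough local support (the spiral of silence), while a small number of bots always voice the same opinion. All parameters below (network density, bot share, support threshold) are hypothetical and much simpler than the empirically calibrated model in the article.

```python
import numpy as np

rng = np.random.default_rng(3)

N_HUMANS, N_BOTS, DENSITY, ROUNDS = 500, 15, 0.02, 50   # roughly 3% bots
N = N_HUMANS + N_BOTS

# Random undirected communication network (Erdos-Renyi-style adjacency matrix).
adj = rng.random((N, N)) < DENSITY
adj = np.triu(adj, 1)
adj = adj | adj.T

opinion = np.where(rng.random(N) < 0.5, 1, -1)   # private opinions, roughly 50/50
opinion[N_HUMANS:] = 1                           # bots all push opinion +1
expressed = opinion.astype(float)                # publicly voiced opinion (0 = silent)

for _ in range(ROUNDS):
    for i in range(N_HUMANS):                    # bots never fall silent
        neighbours = np.flatnonzero(adj[i])
        voiced = expressed[neighbours]
        voiced = voiced[voiced != 0]
        if voiced.size == 0:
            continue
        support = np.mean(voiced == opinion[i])
        # Spiral of silence: stay silent when perceived local support is too low.
        expressed[i] = opinion[i] if support >= 0.5 else 0

voiced_humans = expressed[:N_HUMANS]
print("humans voicing +1:", np.mean(voiced_humans == 1))
print("humans voicing -1:", np.mean(voiced_humans == -1))
print("humans silent:    ", np.mean(voiced_humans == 0))
```

Varying N_BOTS and DENSITY in this sketch reproduces, qualitatively, the effect the article quantifies: a few percent of always-active accounts can shift which opinion dominates the expressed climate.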
Article
Full-text available
The massive spread of digital misinformation has been identified as a major threat to democracies. Communication, cognitive, social, and computer scientists are studying the complex causes for the viral diffusion of misinformation, while online platforms are beginning to deploy countermeasures. Little systematic, data-based evidence has been published to guide these efforts. Here we analyze 14 million messages spreading 400 thousand articles on Twitter during ten months in 2016 and 2017. We find evidence that social bots played a disproportionate role in spreading articles from low-credibility sources. Bots amplify such content in the early spreading moments, before an article goes viral. They also target users with many followers through replies and mentions. Humans are vulnerable to this manipulation, resharing content posted by bots. Successful low-credibility sources are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.
Article
Full-text available
Addressing fake news requires a multidisciplinary effort
Article
Full-text available
Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data. They achieve this through deriving backpropagation signals through a competitive process involving a pair of networks. The representations that can be learned by GANs may be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image super-resolution and classification. The aim of this review paper is to provide an overview of GANs for the signal processing community, drawing on familiar analogies and concepts where possible. In addition to identifying different methods for training and constructing GANs, we also point to remaining challenges in their theory and application.
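The adversarial training loop at the heart of a GAN can be shown in a few lines on a toy problem: a generator maps noise to samples, a discriminator scores samples as real or fake, and the two are updated in alternation. The PyTorch sketch below learns a one-dimensional Gaussian and is purely illustrative; it is not tied to any architecture discussed in the review.

```python
# pip install torch
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy target distribution the generator should learn to imitate: N(4, 1.5).
def real_batch(n=128):
    return 4.0 + 1.5 * torch.randn(n, 1)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator step: real samples labelled 1, generated samples labelled 0.
    real = real_batch()
    fake = G(torch.randn(128, 8)).detach()
    loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator label generated samples as real.
    fake = G(torch.randn(128, 8))
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())  # roughly 4 and 1.5
```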
Conference Paper
Full-text available
So-called 'social bots' have garnered a lot of attention lately. Previous research showed that they attempted to influence political events such as the Brexit referendum and the US presidential elections. It remains, however, somewhat unclear what exactly can be understood by the term 'social bot'. This paper addresses the need to better understand the intentions of bots on social media and to develop a shared understanding of how 'social' bots differ from other types of bots. We thus describe a systematic review of publications that researched bot accounts on social media. Based on the results of this literature review, we propose a scheme for categorising bot accounts on social media sites. Our scheme groups bot accounts by two dimensions - Imitation of human behaviour and Intent.
Conference Paper
Full-text available
Recent studies in social media spam and automation provide anecdotal evidence of the rise of a new generation of spambots, so-called social spambots. Here, for the first time, we extensively study this novel phenomenon on Twitter and we provide quantitative evidence that a paradigm shift exists in spambot design. First, we measure current Twitter's capabilities of detecting the new social spambots. Later, we assess the human performance in discriminating between genuine accounts, social spambots, and traditional spambots. Then, we benchmark several state-of-the-art techniques proposed by the academic literature. Results show that neither Twitter, nor humans, nor cutting-edge applications are currently capable of accurately detecting the new social spambots. Our results call for new approaches capable of turning the tide in the fight against this rising phenomenon. We conclude by reviewing the latest literature on spambot detection and we highlight an emerging common research trend based on the analysis of collective behaviors. Insights derived from both our extensive experimental campaign and survey shed light on the most promising directions of research and lay the foundations for the arms race against the novel social spambots. Finally, to foster research on this novel phenomenon, we make publicly available to the scientific community all the datasets used in this study.
Article
Full-text available
This article argues that the study conducted by Facebook in conjunction with Cornell University did not have sufficient ethical oversight, and neglected in particular to obtain necessary informed consent from the participants in the study. It establishes the importance of informed consent in Internet research ethics and suggests that in Facebook's case (and other, similar cases), a reasonable shift could be made from traditional medical ethics' 'effective consent' to a 'waiver of normative expectations', although this would require much-needed change to the company's standard practice. Finally, it gives some practical recommendations for how to implement such consent strategies, and how the ethical oversight gap between university-led research and industry-led research can be bridged, potentially using emerging Responsible Research and Innovation frameworks which are currently gathering momentum in Europe.
Conference Paper
Full-text available
As popular tools for spreading spam and malware, Sybils (or fake accounts) pose a serious threat to online communities such as Online Social Networks (OSNs). Today, sophisticated attackers are creating realistic Sybils that effectively befriend legitimate users, rendering most automated Sybil detection techniques ineffective. In this paper, we explore the feasibility of a crowdsourced Sybil detection system for OSNs. We conduct a large user study on the ability of humans to detect today's Sybil accounts, using a large corpus of ground-truth Sybil accounts from the Facebook and Renren networks. We analyze detection accuracy by both "experts" and "turkers" under a variety of conditions, and find that while turkers vary significantly in their effectiveness, experts consistently produce near-optimal results. We use these results to drive the design of a multi-tier crowdsourcing Sybil detection system. Using our user study data, we show that this system is scalable, and can be highly effective either as a standalone system or as a complementary technique to current tools.
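The multi-tier idea, accepting a crowd verdict when it is decisive and escalating ambiguous profiles to experts, can be expressed as a small aggregation rule. The vote counts and threshold below are invented; the paper's actual system design is considerably richer.

```python
from collections import Counter

def classify_profile(turker_votes, expert_vote, agreement_threshold=0.8):
    """Two-tier aggregation: accept the turker majority when it is strong enough,
    otherwise escalate the profile to an expert. Votes are 'sybil' or 'real'."""
    counts = Counter(turker_votes)
    label, hits = counts.most_common(1)[0]
    if hits / len(turker_votes) >= agreement_threshold:
        return label, "decided by turkers"
    return expert_vote, "escalated to expert"

# Invented votes for two profiles.
print(classify_profile(["sybil"] * 9 + ["real"], expert_vote="sybil"))
print(classify_profile(["sybil"] * 6 + ["real"] * 4, expert_vote="real"))
```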
Article
Full-text available
The Turing test asked whether one could recognize the behavior of a human from that of a computer algorithm. Today this question has suddenly become very relevant in the context of social media, where text constraints limit the expressive power of humans, and real incentives abound to develop human-mimicking software agents called social bots. These elusive entities wildly populate social media ecosystems, often going unnoticed among the population of real people. Bots can be benign or harmful, aiming at persuading, smearing, or deceiving. Here we discuss the characteristics of modern, sophisticated social bots, and how their presence can endanger online ecosystems and our society. We then discuss current efforts aimed at detection of social bots in Twitter. Characteristics related to content, network, sentiment, and temporal patterns of activity are imitated by bots but at the same time can help discriminate synthetic behaviors from human ones, yielding signatures of engineered social tampering.
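Detection systems of the kind referenced here typically turn content, network, sentiment, and temporal signatures into per-account features and feed them to a supervised classifier. The sketch below does that with scikit-learn on entirely synthetic features and labels, only to show the shape of such a pipeline, not the actual detector discussed in the article.

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Hypothetical per-account features, one column per signal family:
# [content: URL ratio, network: followers/friends, sentiment: mean polarity, temporal: tweets/hour]
n = 1000
X = rng.random((n, 4))
# Synthetic labels for illustration only: "bots" post more URLs, more often, with flatter sentiment.
y = ((X[:, 0] + X[:, 3] - X[:, 2]) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```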
Article
Full-text available
Data extracted from social networks like Twitter are increasingly being used to build applications and services that mine and summarize public reactions to events, such as traffic monitoring platforms, identification of epidemic outbreaks, and public perception about people and brands. However, such services are vulnerable to attacks from socialbots - automated accounts that mimic real users - seeking to tamper with statistics by posting messages generated automatically and interacting with legitimate users. Potentially, if created in large scale, socialbots could be used to bias or even invalidate many existing services by infiltrating the social networks and acquiring the trust of other users over time. This study aims at understanding infiltration strategies of socialbots in the Twitter microblogging platform. To this end, we create 120 socialbot accounts with different characteristics and strategies (e.g., gender specified in the profile, how active they are, the method used to generate their tweets, and the group of users they interact with), and investigate the extent to which these bots are able to infiltrate the Twitter social network. Our results show that even socialbots employing simple automated mechanisms are able to successfully infiltrate the network. Additionally, using a 2^k factorial design, we quantify the infiltration effectiveness of different bot strategies. Our analysis unveils findings that are key for the design of detection and countermeasure approaches.
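A 2^k factorial design enumerates every combination of k binary strategy factors and estimates each factor's main effect from the measured outcomes. The sketch below does this for three invented bot-strategy factors and fabricated infiltration scores, purely to illustrate the analysis, not to reproduce the study's factors or results.

```python
import itertools
import numpy as np

# Three invented binary bot-strategy factors give 2^3 = 8 configurations.
factors = ["posts_often", "reuses_human_text", "targets_topic_group"]
configs = list(itertools.product([0, 1], repeat=len(factors)))

rng = np.random.default_rng(5)
# Fabricated outcome per configuration, e.g. followers acquired during infiltration.
infiltration = {c: 10 + 8 * c[0] + 3 * c[1] + 5 * c[2] + rng.normal(0, 1) for c in configs}

# Main effect of a factor: mean outcome at its high level minus mean at its low level.
for i, name in enumerate(factors):
    high = np.mean([v for c, v in infiltration.items() if c[i] == 1])
    low = np.mean([v for c, v in infiltration.items() if c[i] == 0])
    print(f"main effect of {name}: {high - low:+.1f}")
```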
Article
We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
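Two ingredients distinguish this style-based generator from a traditional one: a mapping network that transforms the latent code z into an intermediate code w, and per-layer modulation of normalized feature maps by learned affine "styles" derived from w, plus injected per-pixel noise for stochastic detail. The PyTorch sketch below compresses these ideas into a single toy block; dimensions, layer counts, and the noise scale are arbitrary, and it is not the published architecture.

```python
# pip install torch
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """z -> w: re-maps the latent code before it controls synthesis."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, z):
        return self.net(z)

class StyledBlock(nn.Module):
    """AdaIN-like modulation: normalize feature maps, then scale and shift them
    with an affine transform of w; add per-pixel noise for stochastic detail."""
    def __init__(self, channels, w_dim=64):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.norm = nn.InstanceNorm2d(channels)
        self.style = nn.Linear(w_dim, 2 * channels)   # per-channel scale and bias

    def forward(self, x, w):
        x = self.conv(x)
        x = x + 0.1 * torch.randn_like(x)             # stochastic variation (freckles, hair, ...)
        scale, bias = self.style(w).chunk(2, dim=1)
        x = self.norm(x)
        return x * (1 + scale[:, :, None, None]) + bias[:, :, None, None]

# Toy usage: a learned constant input would normally start the synthesis network.
mapping = MappingNetwork()
block = StyledBlock(channels=16)
const = torch.randn(1, 16, 4, 4)
w = mapping(torch.randn(1, 64))
print(block(const, w).shape)   # torch.Size([1, 16, 4, 4])
```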
Article
News—real or fake—is now abundant on social media. News posts on social media focus users’ attention on the headlines, but does it matter who wrote the article? We investigate whether changing the presentation format to highlight the source of the article affects its believability and how social media users choose to engage with it. We conducted two experiments and found that nudging users to think about who wrote the article influenced the extent to which they believed it. The presentation format of highlighting the source had a main effect; it made users more skeptical of all articles, regardless of the source’s credibility. For unknown sources, low source ratings had a direct effect on believability. Believability, in turn, influenced the extent to which users would engage with the article (e.g., read, like, comment, and share). We also found confirmation bias to be rampant: users were more likely to believe articles that aligned with their beliefs, over and above the effects of other factors.
Conference Paper
This work investigates how social bots can phish employees of organizations and thus endanger corporate network security. Current literature mostly focuses on traditional phishing methods (through e-mail, phone calls, and USB sticks). We address the serious organizational threats and security risks caused by phishing through online social media, specifically through Twitter. This paper first provides a review of current work. It then describes our experimental development, in which we created and deployed eight social bots on Twitter, each associated with one specific subject. For a period of four weeks, each bot published tweets about its subject and followed people with similar interests. In the final two weeks, our experiment showed that 437 unique users could have been phished, 33 of whom visited our website through the network of an organization. Without revealing any sensitive or real data, the paper analyses some findings of this experiment and addresses further plans for research in this area.
Article
This article is based on a much longer paper published in German in Ernst Forsthoff and Reinhard Horstel (Eds.), Standorte im Zeitstrom: Festschrift für Arnold Gehlen. Zum 70. Geburtstag am 29.1.1974. Frankfurt am Main: Athenäum, 1974. The longer version documents in detail (33 tables) the results of surveys conducted to test the propositions contained in the five hypotheses presented in this article. The propositions are confirmed or refuted, or they are tentatively supported by the data, or they await further testing. Research is being continued. A complete English translation of the paper is available to interested scholars upon request.