Ziv Epstein's research while affiliated with Massachusetts Institute of Technology and other places

Publications (44)

Preprint
With the rise of generative AI, there has been a recent push for disclosing if content is produced by AI. However, it is not clear what the right term(s) are to use for such labels. In this paper, we investigate how the public understands the mapping between nine potential labeling terms (selected through consultation with academics, technology com...
Article
Full-text available
The spread of misinformation online is a global problem that requires global solutions. To that end, we conducted an experiment in 16 countries across 6 continents (N = 34,286; 676,605 observations) to investigate predictors of susceptibility to misinformation about COVID-19, and interventions to combat the spread of this misinformation. In every c...
Article
Understanding shifts in creative work will help guide AI's impact on the media ecosystem.
Preprint
Full-text available
A new class of tools, colloquially called generative AI, can produce high-quality artistic media for visual arts, concept art, music, fiction, literature, video, and animation. The generative capabilities of these tools are likely to fundamentally alter the creative processes by which creators formulate ideas and put them into production. As creati...
Article
Full-text available
There is widespread concern about misinformation circulating on social media. In particular, many argue that the context of social media itself may make people susceptible to the influence of false claims. Here, we test that claim by asking whether simply considering sharing news on social media reduces the extent to which people discriminate truth...
Preprint
Full-text available
Text-to-image generative models have recently exploded in popularity and accessibility. Yet so far, use of these models in creative tasks that bridge the 2D digital world and the creation of physical artefacts has been understudied. We conduct a pilot study to investigate if and how text-to-image models can be used to assist in upstream tasks withi...
Preprint
Full-text available
The ability to discern between true and false information is essential to making sound decisions. However, with the recent increase in AI-based disinformation campaigns, it has become critical to understand the influence of deceptive systems on human information processing. In an experiment (N=128), we investigated how susceptible people are to decept...
Preprint
Full-text available
Modern computational systems have an unprecedented ability to detect, leverage and influence human attention. Prior work identified user engagement and dwell time as two key metrics of attention in digital environments, but these metrics have yet to be integrated into a unified model that can advance the theory and practice of digital attention. We...
Preprint
Full-text available
Existing social media platforms (SMPs) make it incredibly difficult for researchers to conduct studies on social media, which in turn has created a knowledge gap between academia and industry about the effects of platform design on user behavior. To close the gap, we introduce Yourfeed, a research tool for conducting ecologically valid social media...
Preprint
Full-text available
Unlike traditional media, social media typically provides quantified metrics of how many users have engaged with each piece of content. Some have argued that the presence of these cues promotes the spread of misinformation. Here we investigate the causal effect of social cues on users' engagement with social media posts. We conducted an experiment...
Preprint
Full-text available
Recent breakthroughs in AI-generated music open the door to new forms of co-creation and co-creativity. We present Artificial.fm, a proof-of-concept casual creator that blends AI-music generation, subjective ratings, and personalized recommendation for the creation and curation of AI-generated music. Listeners can rate emergent songs to steer the...
Preprint
Full-text available
Generative AI techniques like those that synthesize images from text (text-to-image models) offer new possibilities for creatively imagining new ideas. We investigate the capabilities of these models to help communities engage in conversations about their collective future. In particular, we design and deploy a facilitated experience where particip...
Article
Social media platforms are increasingly deploying complex interventions to help users detect false news. Labeling false news using techniques that combine crowd-sourcing with artificial intelligence (AI) offers a promising way to inform users about potentially low-quality information without censoring content, but also can be hard for users to unde...
Preprint
The spread of misinformation online is a global problem that requires global solutions. To that end, we conducted an experiment in 16 countries across 6 continents (N = 33,480) to investigate predictors of susceptibility to misinformation and interventions to combat misinformation. In every country, participants with a more analytic cognitive style...
Article
Full-text available
Significance: The recent emergence of deepfake videos raises theoretical and practical questions. Are humans or the leading machine learning model more capable of detecting algorithmic visual manipulations of videos? How should content moderation systems be designed to detect and flag video-based misinformation? We present data showing that ordinary...
Preprint
Full-text available
Social media platforms are increasingly deploying complex interventions to help users detect false news. Labeling false news using techniques that combine crowd-sourcing with artificial intelligence (AI) offers a promising way to inform users about potentially low-quality information without censoring content, but also can be hard for users to unde...
Article
Full-text available
It has been widely argued that social media users with low digital literacy—who lack fluency with basic technological concepts related to the internet—are more likely to fall for online misinformation, but surprisingly little research has examined this association empirically. In a large survey experiment involving true and false news posts about p...
Preprint
There is widespread concern about fake news and other misinformation circulating on social media. In particular, many argue that the context of social media itself may make people particularly susceptible to the influence of false claims. Here, we test that claim by asking whether simply considering whether to share news on social media reduces peo...
Article
How does the visual design of digital platforms impact user behavior and the resulting environment? A body of work suggests that introducing social signals to content can increase both the inequality and unpredictability of its success, but this effect has only been shown in the context of music listening. To further examine the effect of social influence on m...
Article
Technologies for manipulating and faking online media may outpace people's ability to tell the difference.
Preprint
It has been widely argued that social media users with low digital literacy – who lack fluency with basic technological concepts related to the internet – are more likely to fall for online misinformation, but surprisingly little research has examined this association empirically. In a large survey experiment involving true and false news posts abo...
Preprint
Full-text available
How does the visual design of digital platforms impact user behavior and the resulting environment? A body of work suggests that introducing social signals to content can increase both the inequality and unpredictability of its success, but this effect has only been shown in the context of music listening. To further examine the effect of social influence on m...
Article
Full-text available
Recent research suggests that shifting users’ attention to accuracy increases the quality of news they subsequently share online. Here we help develop this initial observation into a suite of deployable interventions for practitioners. We ask (i) how prior results generalize to other approaches for prompting users to consider accuracy, and (ii) fo...
Preprint
Full-text available
The recent emergence of deepfake videos leads to an important societal question: how can we know if a video that we watch is real or fake? In three online studies with 15,016 participants, we present authentic videos and deepfakes and ask participants to identify which is which. We compare the performance of ordinary participants against the leadin...
Article
Full-text available
In recent years, there has been a great deal of concern about the proliferation of false and misleading news on social media [1–4]. Academics and practitioners alike have asked why people share such misinformation, and sought solutions to reduce the sharing of misinformation [5–7]. Here, we attempt to address both of these questions. First, we find that...
Preprint
Recent research suggests that shifting users’ attention to accuracy increases the quality of news they subsequently share online. Here we help develop this initial observation into a suite of deployable interventions for practitioners. We ask (i) how prior results generalize to other approaches for prompting users to consider accuracy, and (ii) for...
Article
Full-text available
The recent sale of an AI-generated portrait for $432,000 at Christie’s art auction has raised questions about how credit and responsibility should be allocated to individuals involved, and how the anthropomorphic perception of the AI system contributed to the artwork’s success. Here, we identify natural heterogeneity in the extent to which differen...
Preprint
Full-text available
The latent space modeled by generative adversarial networks (GANs) represents a large possibility space. By interpolating categories generated by GANs, it is possible to create novel hybrid images. We present "Meet the Ganimals," a casual creator built on interpolations of BigGAN that can generate novel, hybrid animals called ganimals by efficientl...
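A minimal sketch of the latent-interpolation idea described above (an illustration only, not the authors' implementation; the 128-dimensional vectors and category assignments are hypothetical stand-ins for real BigGAN latents):

```python
import numpy as np

# Hypothetical latent vectors standing in for two BigGAN categories.
rng = np.random.default_rng(0)
z_a = rng.standard_normal(128)  # assumed latent for category A
z_b = rng.standard_normal(128)  # assumed latent for category B

def interpolate(z_start, z_end, steps=8):
    """Return evenly spaced linear blends between two latent vectors."""
    return [(1 - t) * z_start + t * z_end for t in np.linspace(0.0, 1.0, steps)]

# Decoding each blended vector with the generator would yield a hybrid
# image that morphs between the two source categories.
hybrids = interpolate(z_a, z_b)
```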
Article
How can social media platforms fight the spread of misinformation? One possibility is to use newsfeed algorithms to downrank content from sources that users rate as untrustworthy. But will laypeople be handicapped by motivated reasoning or lack of expertise, and thus unable to identify misinformation sites? And will they "game" t...
Preprint
Full-text available
The spread of false and misleading news on social media is of great societal concern. Why do people share such content, and what can be done about it? In a first survey experiment (N=1,015), we demonstrate a dissociation between accuracy judgments and sharing intentions: even though true headlines are rated as much more accurate than false headline...
Article
The human willingness to pay costs to benefit anonymous others is often explained by social preferences: rather than only valuing their own material payoff, people also include the payoffs of others in their utility function. But how successful is this concept of outcome-based social preferences for actually predicting out-of-sample behavior? We in...
Preprint
Recent advances in neural networks for content generation enable artificial intelligence (AI) models to generate high-quality media manipulations. Here we report on a randomized experiment designed to study the effect of exposure to media manipulations on over 15,000 individuals' ability to discern machine-manipulated media. We engineer a neural ne...
Preprint
The "small world phenomenon," popularized by Stanley Milgram, suggests that individuals from across a social network are connected via a short path of mutual friends and can leverage their local social information to efficiently traverse that network. Existing social search experiments are plagued by high rates of attrition, which prohibit comprehe...
Conference Paper
Full-text available
We introduce TuringBox, a platform to democratize the study of AI. On one side of the platform, AI contributors upload existing and novel algorithms to be studied scientifically by others. On the other side, AI examiners develop and post machine intelligence tasks to evaluate and characterize the outputs of algorithms. We outline the architecture o...
Article
Full-text available
AI researchers employ not only the scientific method, but also methodology from mathematics and engineering. However, the use of the scientific method - specifically hypothesis testing - in AI is typically conducted in service of engineering objectives. Growing interest in topics such as fairness and algorithmic bias shows that engineering-focused q...
Conference Paper
The human willingness to pay costs to benefit anonymous others is often explained by social preferences: rather than only valuing their own material payoff, people also care in some fashion about the outcomes of others. But how successful is this concept of outcome-based social preferences for actually predicting out-of-sample behavior? We investig...
Article
Full-text available
When faced with the chance to help someone in mortal danger, what is our first response? Do we leap into action, only later considering the risks to ourselves? Or must instinctive self-preservation be overcome by will-power in order to act? We investigate this question by examining the testimony of Carnegie Hero Medal Recipients (CHMRs), extreme al...

Citations

... The identification and rejection of misinformation have become key challenges on social media (Arechar et al., 2023). Thus, it is critical for users to engage in critical appraisal and discern between accurate and misleading information. ...
... These models have found significant applications across various domains, including natural language processing, computer vision, and brain imaging [10,11]. In the creative arts, generative AI tools have been instrumental in producing high-quality artistic media, including visual arts, music, literature, video, and animation [12]. We believe that the generative capabilities of these AI tools are fundamentally altering creative processes, leading to a reimagining of creativity across many sectors of society, including education. ...
... Relying on programmatic creativity, a method that uses AI to automate ad creation (Bakpayev et al. 2022), AI can be used to interpret 'big data to select creative elements and arrange them in a creative format' (Chen et al. 2019: 350). Similarly, AI in advertising can be used to develop or facilitate the ideation of advertising assets (Smith et al. 2023). For example, Chase Bank worked with the AI offered by Persado to create marketing copy that consumers deemed more personable. ...
... The premise of the Screenomics Project is to capture recordings of individual users' screens as a sequence of screenshots, including screenshots of social media use. Software is then used to extract text and images, making the screenshots a searchable database (see also Epstein & Lin, 2022). ...
... As a consequence, users might carefully consider whether they want to share such information. Recent research suggests that positive social cues facilitate sharing of information more when it is true rather than false [48]. Future research may investigate if, conversely, users also avoid sharing information when social cues are negative, and whether this occurs out of fear of backlash or out of intrinsic hesitation to spread potential falsehoods. ...
... There is no clear relationship between conspiracy theories and behavioral motivation (Altay, Berriche, and Acerbi 2023; Acerbi 2019b; Petersen 2020; Williams 2022; Mercier 2020). Instead, propagandists mainly use what is already available in the misinformation atmosphere to motivate people to a specific act (Arechar et al. 2022; Petersen 2020; Acerbi 2019a; 2019b; Marie and Petersen 2022; Tangherlini et al. 2020; Green et al. 2023). ...
... Mathematical models of crowdsourcing and human computation today largely assume small modular tasks, "computational primitives" such as labels, comparisons, or votes requiring little coordination [10,29]. These have been used to understand how crowds can be organized to advance scientific discoveries, mobilize collective action, and contribute to participatory governance [15,17,33,35–37,44]. ...
... Effron and Raj (2020) found that familiarity with news headlines makes them more likely to be shared on social media regardless of their veracity. However, attempts to measure sharing intentions for fake news are hypothetical (Sirlin et al., 2021). Since the use of the Illusory Truth Effect in the analysis of fake news is relatively new, it should be noted that there are gaps in the current body of research. ...
... Fact-checking by crowdsourcing exploits the wisdom of crowds: crowd judgments correlate highly with professional fact-checkers' judgments (Epstein, Pennycook, and Rand 2020), although they are not yet as accurate as professionals (Godel et al. 2021). While these methods are promising for combating the infodemic, machine-learning-based models are known to be highly context-dependent (Bang et al. 2021) and tend to perform poorly on newly circulating false information, and crowdsourcing requires significant time and cost when a large amount of information must be verified. ...
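As a hedged illustration of how crowd judgments can be compared with professional fact-checkers (a sketch with made-up numbers, not data from the cited studies), one can average layperson ratings per item and correlate them with expert ratings:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical ratings: rows are laypeople, columns are news items/sources.
crowd = np.array([
    [4, 1, 3, 5, 2],
    [5, 2, 3, 4, 1],
    [4, 1, 2, 5, 2],
])
# Hypothetical professional fact-checker ratings for the same items.
experts = np.array([4.5, 1.0, 2.5, 5.0, 1.5])

crowd_means = crowd.mean(axis=0)       # aggregate the crowd per item
r, p = pearsonr(crowd_means, experts)  # agreement with professionals
print(f"crowd-expert correlation: r = {r:.2f} (p = {p:.3f})")
```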