Article

One-shot intervention reduces online engagement with distorted content


Abstract

Depression is one of the leading causes of disability worldwide. Individuals with depression often experience unrealistic and overly negative thoughts, i.e., cognitive distortions, that cause maladaptive behaviors and feelings. Now that a majority of the US population uses social media platforms, concerns have been raised that these platforms may serve as a vector for the spread of distorted ideas and thinking amid a global mental health epidemic. Here, we study how individuals (N=838) interact with distorted content on social media platforms using a simulated environment similar to Twitter (now X). We find that individuals with more depression symptoms tend to prefer distorted content more than those with fewer symptoms. However, a simple one-shot intervention can teach individuals to recognize distorted content and drastically reduce their interactions with it across the entire depression scale. This suggests that distorted thinking on social media may disproportionately affect individuals with depression, but that simple awareness training can mitigate this effect. Our findings have important implications for understanding the role of social media in propagating distorted thinking and for identifying potential paths to reducing the societal cost of mental health disorders.
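
To make the analysis concrete, here is a minimal sketch (not the authors' code) of how engagement with distorted posts could be modeled as a function of depression symptoms and a one-shot intervention; all variable names and data below are hypothetical.

```python
# Minimal sketch (not the authors' analysis code): model the probability of
# engaging with a distorted post as a function of depression score and a
# one-shot intervention flag. All variable names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 800
df = pd.DataFrame({
    "phq9": rng.integers(0, 28, size=n),          # depression symptom score (0-27)
    "intervention": rng.integers(0, 2, size=n),   # 1 = received one-shot training
})
# Simulated outcome: engagement rises with symptoms, drops with the intervention.
logit = -1.0 + 0.08 * df["phq9"] - 1.2 * df["intervention"]
df["engaged_distorted"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("engaged_distorted ~ phq9 * intervention", data=df).fit(disp=0)
print(model.params)
# A negative coefficient on `intervention` corresponds to reduced engagement
# with distorted content; the interaction term asks whether the reduction
# holds across the depression scale.
```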

Article
Full-text available
The integration of large language models (LLMs) into mental healthcare and research heralds a potentially transformative shift, one offering enhanced access to care, efficient data collection, and innovative therapeutic tools. This paper reviews the development, function, and burgeoning use of LLMs in psychiatry, highlighting their potential to enhance mental healthcare through improved diagnostic accuracy, personalized care, and streamlined administrative processes. It is also acknowledged that LLMs introduce challenges related to computational demands, potential for misinterpretation, and ethical concerns, necessitating the development of pragmatic frameworks to ensure their safe deployment. We explore both the promise of LLMs in enriching psychiatric care and research through examples such as predictive analytics and therapy chatbots and risks including labor substitution, privacy concerns, and the necessity for responsible AI practices. We conclude by advocating for processes to develop responsible guardrails, including red-teaming, multi-stakeholder-oriented safety, and ethical guidelines/frameworks, to mitigate risks and harness the full potential of LLMs for advancing mental health.
Article
Full-text available
The current studies aimed to find out whether a nonintentional form of mood contagion exists and which mechanisms can account for it. In these experiments, participants who expected to be tested for text comprehension listened to an affectively neutral philosophical speech that was spoken in a slightly sad or happy voice. The authors found that (a) the emotional expression induced a congruent mood state in the listeners, (b) inferential accounts of emotional sharing were not easily reconciled with the findings, (c) different affective experiences emerged from intentional and nonintentional forms of emotional sharing, and (d) a perception–behavior link (T. L. Chartrand & J. A. Bargh, 1999) can account for these findings, because participants who were required to repeat the speech spontaneously imitated the target person's vocal expression of emotion.
Article
Full-text available
Large Language Models (LLMs) have recently gathered attention with the release of ChatGPT, a user-centered chatbot released by OpenAI. In this perspective article, we retrace the evolution of LLMs to understand the revolution brought by ChatGPT in the artificial intelligence (AI) field. The opportunities offered by LLMs in supporting scientific research are multiple, and various models have already been tested in Natural Language Processing (NLP) tasks in this domain. The impact of ChatGPT has been huge for the general public and the research community, with many authors using the chatbot to write part of their articles and some papers even listing ChatGPT as an author. Alarming ethical and practical challenges emerge from the use of LLMs, particularly in the medical field, given the potential impact on public health. Infodemics are a trending topic in public health, and the ability of LLMs to rapidly produce vast amounts of text could amplify the spread of misinformation at an unprecedented scale, creating an "AI-driven infodemic," a novel public health threat. Policies to counter this phenomenon need to be elaborated rapidly, and the inability to accurately detect artificial-intelligence-produced text remains an unresolved issue.
Article
Full-text available
Background: The Common Elements Toolbox (COMET) is an unguided digital single-session intervention (SSI) based on principles of cognitive behavioral therapy and positive psychology. Although unguided digital SSIs have shown promise in the treatment of youth psychopathology, the data are more mixed regarding their efficacy in adults. Objective: This study aimed to investigate the efficacy of the COMET-SSI versus a waiting list control on depression and other transdiagnostic mental health outcomes for Prolific participants with a history of psychopathology. Methods: We conducted an investigator-blinded, preregistered randomized controlled trial comparing the COMET-SSI (n=409) with an 8-week waiting list control (n=419). Participants were recruited from the web-based workspace Prolific and assessed for depression, anxiety, work and social functioning, psychological well-being, and emotion regulation at baseline and at 2, 4, and 8 weeks after the intervention. The main outcomes were short-term (2 weeks) and long-term (8 weeks) changes in depression and anxiety. The secondary outcomes were the 8-week changes in work and social functioning, well-being, and emotion regulation. Analyses were conducted according to the intent-to-treat principle with imputation, without imputation, and using a per-protocol sample. In addition, we conducted sensitivity analyses to identify inattentive responders. Results: Women made up 61.9% (513/828) of the sample, which had a mean age of 35.75 (SD 11.93) years. Most participants (732/828, 88.3%) met the criteria for screening positive for depression or anxiety on at least one validated screening scale. A review of the text data suggested that adherence to the COMET-SSI was near perfect, there were very few inattentive respondents, and satisfaction with the intervention was high. However, despite being powered to detect small effects, there were negligible differences between the conditions in the various outcomes at the various time points, even when focusing on subsets of individuals with more severe symptoms. Conclusions: Our results do not support the use of the COMET-SSI in adult Prolific participants. Future work should explore alternate ways of intervening with paid web-based participants, including matching individuals to the SSIs they may be most responsive to. Trial registration: ClinicalTrials.gov NCT05379881, https://clinicaltrials.gov/ct2/show/NCT05379881.
Article
Full-text available
Emotions are an important element of human interactions, including those in social media. Despite the prominence of text-based messages in online communication, little is known about how the emotional tone of the messages affects the emotions of the recipients. With three experiments, the current study investigated these effects in the context of online news discussions. Participants first read news discussion threads with a negative, neutral, and positive tone, and then rated their subjective emotional state in terms of valence and arousal. Results showed that negatively toned discussions induced more negative valence and higher arousal ratings than the other conditions. Positive discussions had an opposite effect. Emotionally toned online comments evidently affect the quality and the intensity of the readers’ subjective emotions. We discuss implications for future research and emotion mitigation strategies to improve online discussion quality.
Article
Full-text available
General Audience Summary: Helping students learn to think critically is an important priority across education levels, but there is currently no clear consensus on how to teach critical thinking. Many proposed teaching strategies are very elaborate and involve extensive discussion and analysis, which makes it difficult for teachers to incorporate critical thinking instruction into their existing curriculum. In our current research, we explore whether a more focused approach might improve students' critical thinking performance. Specifically, we examine an inductive approach to critical thinking instruction. Inductive learning, commonly known as learning by example, happens when a person learns a general pattern from multiple examples of the pattern. In this case, we ask whether participants can learn to identify illogical or biased claims, a common measure of critical thinking ability, by presenting them with multiple examples of different kinds of fallacies and biases. We presented participants with sets of example scenarios in which an individual makes a claim based on an observation, and participants categorized which fallacy or bias, if any, the individual in the scenario was committing. In two studies, we find that this critical thinking categorization practice improves performance on a delayed open-ended critical thinking assessment. These results show that critical thinking skills can be taught using a well-established, specific, psychologically grounded method, and they come at a critical juncture in our society. In a time of fake news, misinformation combined with a lack of critical thinking skills among the populace can have dire consequences. We believe our current results are of broad interest to educators and applied cognitive psychologists, and show promise for improving people's defenses against misinformation.
Article
Full-text available
Much like a viral contagion, misinformation can spread rapidly from one individual to another. Inoculation theory offers a logical basis for developing a psychological "vaccine" against misinformation. We discuss the origins of inoculation theory, starting with its roots in the 1960s as a "vaccine for brainwash," and detail the major theoretical and practical innovations that inoculation research has witnessed over the years. Specifically, we review a series of randomized lab and field studies that show that it is possible to preemptively "immunize" people against misinformation by preexposing them to severely weakened doses of the techniques that underlie its production, along with ways to spot and refute them. We review evidence from interventions that we developed with governments and social media companies to help citizens around the world recognize and resist unwanted attempts to influence and mislead. We conclude with a discussion of important open questions about the effectiveness of inoculation interventions.
Article
Full-text available
The Covid-19 physical distancing measures had a detrimental effect on adolescents' mental health. Adolescents worldwide alleviated the negative experiences of social distancing by spending more time on digital devices. Through a systematic literature search in eight academic databases (ERIC, ProQuest Sociology, Communication & Mass Media Complete, Psychology and Behavioral Sciences Collection, PsycINFO, CINAHL, PubMed, and Web of Science), the present systematic review and meta-analysis first summarized the existing evidence from 30 studies, published up to September 2021, on the link between mental health and digital media use in adolescents during Covid-19. Digital media use measures included social media, screen time, and digital media addiction. Mental health measures were grouped into conceptually similar dimensions, such as well-being, ill-being, social well-being, lifestyle habits, and Covid-19-related stress. Results showed that, although most studies reported a positive association between ill-being and social media use (r = 0.171, p = 0.011) and between ill-being and media addiction (r = 0.434, p = 0.024), not all types of digital media use had adverse consequences on adolescents' mental health. In particular, one-to-one communication, self-disclosure in the context of mutual online friendship, as well as positive and funny online experiences mitigated feelings of loneliness and stress. Hence, these positive aspects of online activities should be promoted. At the same time, awareness of the detrimental effects of addictive digital media use should be raised: that would include making adolescents more aware of adverse mechanisms such as social comparison, fear of missing out, and exposure to negative content, which were more likely to happen during social isolation and confinement due to the pandemic.
Article
Full-text available
Significance: The role of social media in political discourse has been the topic of intense scholarly and public debate. Politicians and commentators from all sides allege that Twitter’s algorithms amplify their opponents’ voices, or silence theirs. Policy makers and researchers have thus called for increased transparency on how algorithms influence exposure to political content on the platform. Based on a massive-scale experiment involving millions of Twitter users, a fine-grained analysis of political parties in seven countries, and 6.2 million news articles shared in the United States, this study carries out the most comprehensive audit of an algorithmic recommender system and its effects on political content. Results unveil that the political right enjoys higher amplification compared to the political left.
Article
Full-text available
Background: Detection of depression gained prominence soon after this troublesome disease emerged as a serious public health concern worldwide. Objective: This systematic review aims to summarize the findings of previous studies concerning applying machine learning (ML) methods to text data from social media to detect depressive symptoms and to suggest directions for future research in this area. Methods: A bibliographic search was conducted for the period of January 1990 to December 2020 in Google Scholar, PubMed, Medline, ERIC, PsycINFO, and BioMed. Two reviewers retrieved and independently assessed the 418 studies consisting of 322 articles identified through database searching and 96 articles identified through other sources; 17 of the studies met the criteria for inclusion. Results: Of the 17 studies, 10 had identified depression based on researcher-inferred mental status, 5 had identified it based on users’ own descriptions of their mental status, and 2 were identified based on community membership. The ML approaches of 13 of the 17 studies were supervised learning approaches, while 3 used unsupervised learning approaches; the remaining 1 study did not describe its ML approach. Challenges in areas such as sampling, optimization of approaches to prediction and their features, generalizability, privacy, and other ethical issues call for further research. Conclusions: ML approaches applied to text data from users on social media can work effectively in depression detection and could serve as complementary tools in public mental health practice.
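
As an illustration of the supervised-learning approach used by most of the reviewed studies, here is a hedged sketch of a text classifier built with scikit-learn; the toy posts and labels are invented, and real studies use far larger, ethically sourced datasets.

```python
# Toy sketch of supervised depression-symptom detection from short texts.
# The example posts and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "i feel worthless and nothing will ever get better",
    "can't sleep again, everything is pointless",
    "nobody would notice if i was gone",
    "had a great hike with friends this weekend",
    "excited to start the new job on monday",
    "made pasta from scratch and it turned out great",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = depression-indicative, 0 = control

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)

print(clf.predict(["everything feels pointless lately"]))   # likely [1]
print(clf.predict(["lovely weekend hiking with friends"]))  # likely [0]
```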
Article
Full-text available
Despite heightened awareness of the detrimental impact of hate speech on social media platforms on affected communities and public discourse, there is little consensus on approaches to mitigate it. While content moderation—either by governments or social media companies—can curb online hostility, such policies may suppress valuable as well as illicit speech and might disperse rather than reduce hate speech. As an alternative strategy, an increasing number of international and nongovernmental organizations (I/NGOs) are employing counterspeech to confront and reduce online hate speech. Despite their growing popularity, there is scant experimental evidence on the effectiveness and design of counterspeech strategies (in the public domain). Modeling our interventions on current I/NGO practice, we randomly assign English-speaking Twitter users who have sent messages containing xenophobic (or racist) hate speech to one of three counterspeech strategies—empathy, warning of consequences, and humor—or a control group. Our intention-to-treat analysis of 1,350 Twitter users shows that empathy-based counterspeech messages can increase the retrospective deletion of xenophobic hate speech by 0.2 SD and reduce the prospective creation of xenophobic hate speech over a 4-wk follow-up period by 0.1 SD. We find, however, no consistent effects for strategies using humor or warning of consequences. Together, these results advance our understanding of the central role of empathy in reducing exclusionary behavior and inform the design of future counterspeech interventions.
Article
Full-text available
The question of whether screen time, particularly time spent with social media and smartphones, influences mental health outcomes remains a topic of considerable debate among policy makers, the public, and scholars. Some scholars have argued passionately that screen media may be contributing to an increase in poor psychosocial functioning and risk of suicide, particularly among teens. Other scholars contend that the evidence is not yet sufficient to support such a dramatic conclusion. The current meta-analysis included 37 effect sizes from 33 separate studies. To consider the most recent research, all studies analyzed were published between 2015 and 2019. Across studies, evidence suggests that screen media plays little role in mental health concerns. In particular, there was no evidence that screen media contribute to suicidal ideation or other mental health outcomes. This result was also true when investigating smartphones or social media specifically. Overall, as has been the case for previous media such as video games, concerns about screen time and mental health are not based in reliable data.
Article
Full-text available
Significance: Can entire societies become more or less depressed over time? Here, we look for the historical traces of cognitive distortions, thinking patterns that are strongly associated with internalizing disorders such as depression and anxiety, in millions of books published over the course of the last two centuries in English, Spanish, and German. We find a pronounced “hockey stick” pattern: Over the past two decades the textual analogs of cognitive distortions surged well above historical levels, including those of World War I and II, after declining or stabilizing for most of the 20th century. Our results point to the possibility that recent socioeconomic changes, new technology, and social media are associated with a surge of cognitive distortions.
Article
Full-text available
In recent years, there has been a great deal of concern about the proliferation of false and misleading news on social media [1-4]. Academics and practitioners alike have asked why people share such misinformation, and sought solutions to reduce the sharing of misinformation [5-7]. Here, we attempt to address both of these questions. First, we find that the veracity of headlines has little effect on sharing intentions, despite having a large effect on judgments of accuracy. This dissociation suggests that sharing does not necessarily indicate belief. Nonetheless, most participants say it is important to share only accurate news. To shed light on this apparent contradiction, we carried out four survey experiments and a field experiment on Twitter; the results show that subtly shifting attention to accuracy increases the quality of news that people subsequently share. Together with additional computational analyses, these findings indicate that people often share misinformation because their attention is focused on factors other than accuracy, and they therefore fail to implement a strongly held preference for accurate sharing. Our results challenge the popular claim that people value partisanship over accuracy [8,9], and provide evidence for scalable attention-based interventions that social media platforms could easily implement to counter misinformation online.
Article
Full-text available
There has been increasing concern with the growing infusion of misinformation, or “fake news”, into public discourse and politics in many western democracies. Our article first briefly reviews the current state of the literature on conventional countermeasures to misinformation. We then explore proactive measures, based on the psychological theory of “inoculation”, that aim to prevent misinformation from finding traction in the first place. Inoculation rests on the idea that if people are forewarned that they might be misinformed and are exposed to weakened examples of the ways in which they might be misled, they will become more immune to misinformation. We review a number of techniques that can boost people’s resilience to misinformation, ranging from general warnings to more specific instructions about misleading (rhetorical) techniques. We show that, based on the available evidence, inoculation appears to be a promising avenue to help protect people from misinformation and “fake news”.
Article
Full-text available
Depression is a leading cause of disability worldwide, but is often underdiagnosed and undertreated. Cognitive behavioural therapy holds that individuals with depression exhibit distorted modes of thinking, that is, cognitive distortions, that can negatively affect their emotions and motivation. Here, we show that the language of individuals with a self-reported diagnosis of depression on social media is characterized by higher levels of distorted thinking compared with a random sample. This effect is specific to the distorted nature of the expression and cannot be explained by the presence of specific topics, sentiment or first-person pronouns. This study identifies online language patterns that are indicative of depression-related distorted thinking. We caution that any future applications of this research should carefully consider ethical and data privacy issues.
Article
Full-text available
This study estimates empirically derived guidelines for effect size interpretation for research in social psychology overall and sub‐disciplines within social psychology, based on analysis of the true distributions of the two types of effect size measures widely used in social psychology (correlation coefficient and standardized mean differences). Analysis of empirically derived distributions of 12,170 correlation coefficients and 6,447 Cohen's d statistics extracted from studies included in 134 published meta‐analyses revealed that the 25th, 50th, and 75th percentiles corresponded to correlation coefficient values of 0.12, 0.24, and 0.41 and to Cohen's d values of 0.15, 0.36, and 0.65 respectively. The analysis suggests that the widely used Cohen's guidelines tend to overestimate medium and large effect sizes. Empirically derived effect size distributions in social psychology overall and its sub‐disciplines can be used both for effect size interpretation and for sample size planning when other information about effect size is not available.
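
For readers comparing the two metrics, the sketch below applies the standard equal-groups conversion d = 2r/sqrt(1 - r^2) to the correlation percentiles reported above; this is our illustration, not an analysis from the paper.

```python
# Sketch: convert the empirically derived correlation percentiles to Cohen's d
# using the standard equal-group-size conversion.
import math

def r_to_d(r: float) -> float:
    """Convert a correlation coefficient to Cohen's d (equal-group assumption)."""
    return 2 * r / math.sqrt(1 - r**2)

for label, r in [("25th", 0.12), ("50th", 0.24), ("75th", 0.41)]:
    print(f"{label} percentile: r = {r:.2f} -> d = {r_to_d(r):.2f}")
# Note the converted values differ somewhat from the d percentiles reported
# above (0.15, 0.36, 0.65), since the two distributions were compiled from
# different sets of meta-analyses.
```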
Article
Full-text available
The Internet has evolved into a ubiquitous and indispensable digital environment in which people communicate, seek information, and make decisions. Despite offering various benefits, online environments are also replete with smart, highly adaptive choice architectures designed primarily to maximize commercial interests, capture and sustain users’ attention, monetize user data, and predict and influence future behavior. This online landscape holds multiple negative consequences for society, such as a decline in human autonomy, rising incivility in online conversation, the facilitation of political extremism, and the spread of disinformation. Benevolent choice architects working with regulators may curb the worst excesses of manipulative choice architectures, yet the strategic advantages, resources, and data remain with commercial players. One way to address some of this imbalance is with interventions that empower Internet users to gain some control over their digital environments, in part by boosting their information literacy and their cognitive resistance to manipulation. Our goal is to present a conceptual map of interventions that are based on insights from psychological science. We begin by systematically outlining how online and offline environments differ despite being increasingly inextricable. We then identify four major types of challenges that users encounter in online environments: persuasive and manipulative choice architectures, AI-assisted information architectures, false and misleading information, and distracting environments. Next, we turn to how psychological science can inform interventions to counteract these challenges of the digital world. After distinguishing among three types of behavioral and cognitive interventions—nudges, technocognition, and boosts—we focus on boosts, of which we identify two main groups: (a) those aimed at enhancing people’s agency in their digital environments (e.g., self-nudging, deliberate ignorance) and (b) those aimed at boosting competencies of reasoning and resilience to manipulation (e.g., simple decision aids, inoculation). These cognitive tools are designed to foster the civility of online discourse and protect reason and human autonomy against manipulative choice architectures, attention-grabbing techniques, and the spread of false information.
Article
Full-text available
Within a relatively short time span, social media have transformed the way humans interact, leading many to wonder what, if any, implications this interactive revolution has had for people’s emotional lives. Over the past 15 years, an explosion of research has examined this issue, generating countless studies and heated debate. Although early research generated inconclusive findings, several experiments have revealed small negative effects of social media use on well-being. These results mask, however, a deeper set of complexities. Accumulating evidence indicates that social media can enhance or diminish well-being depending on how people use them. Future research is needed to model these complexities using stronger methods to advance knowledge in this domain.
Article
Full-text available
Human sleep/wake cycles follow a stable circadian rhythm associated with hormonal, emotional, and cognitive changes. Changes of this cycle are implicated in many mental health concerns. In fact, the bidirectional relation between major depressive disorder and sleep has been well-documented. Despite a clear link between sleep disturbances and subsequent disturbances in mood, it is difficult to determine from self-reported data which specific changes of the sleep/wake cycle play the most important role in this association. Here we observe marked changes of activity cycles in millions of Twitter posts of 688 subjects who explicitly stated in unequivocal terms that they had received a (clinical) diagnosis of depression as compared to the activity cycles of a large control group (n = 8791). Rather than a phase-shift, as reported in other work, we find significant changes of activity levels in the evening and before dawn. Compared to the control group, depressed subjects were significantly more active from 7 PM to midnight and less active from 3 to 6 AM. Content analysis of tweets revealed a steady rise in rumination and emotional content from midnight to dawn among depressed individuals. These results suggest that diagnosis and treatment of depression may focus on modifying the timing of activity, reducing rumination, and decreasing social media use at specific hours of the day.
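
A minimal sketch of the kind of hourly activity profile this comparison rests on, using invented timestamps; the two time windows match those highlighted in the abstract.

```python
# Sketch: fraction of a user's posts falling in the evening (7 PM-midnight)
# and pre-dawn (3-6 AM) windows. Timestamps are invented for illustration.
from collections import Counter
from datetime import datetime

timestamps = ["2023-05-01 23:14", "2023-05-02 00:41", "2023-05-02 19:55",
              "2023-05-03 04:10", "2023-05-03 21:02", "2023-05-04 22:30"]

hours = [datetime.strptime(t, "%Y-%m-%d %H:%M").hour for t in timestamps]
counts = Counter(hours)
total = len(hours)

evening = sum(counts[h] for h in range(19, 24)) / total   # 19:00-23:59
pre_dawn = sum(counts[h] for h in range(3, 6)) / total    # 03:00-05:59
print(f"evening share: {evening:.2f}, pre-dawn share: {pre_dawn:.2f}")
```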
Article
Full-text available
This study investigates the long-term effectiveness of active psychological inoculation as a means to build resistance against misinformation. Using 3 longitudinal experiments (2 preregistered), we tested the effectiveness of Bad News, a real-world intervention in which participants develop resistance against misinformation through exposure to weakened doses of misinformation techniques. In 3 experiments (N Exp1 = 151, N Exp2 = 194, N Exp3 = 170), participants played either Bad News (inoculation group) or Tetris (gamified control group) and rated the reliability of news headlines that either used a misinformation technique or not. We found that participants rate fake news as significantly less reliable after the intervention. In Experiment 1, we assessed participants at regular intervals to explore the longevity of this effect and found that the inoculation effect remains stable for at least 3 months. In Experiment 2, we sought to replicate these findings without regular testing and found significant decay over a 2-month time period, so that the long-term inoculation effect was no longer significant. In Experiment 3, we replicated the inoculation effect and investigated whether long-term effects could be due to item-response memorization or the fake-to-real ratio of items presented, but found that this is not the case. We discuss implications for inoculation theory and psychological research on misinformation. Public Significance Statement: This study shows that inoculation-based media and information literacy interventions such as the Bad News Game can confer protection against the influence of misinformation over time. With regular assessment, the positive effects can be maintained for at least 3 months. Without regular "boosting," the effects dissipate within 2 months.
Article
Full-text available
The purpose of this study is to understand the role of social media content on users' engagement behavior. More specifically, we investigate: (i) the direct effects of format and platform on users' passive and active engagement behavior, and (ii) the moderating effect of content context on the link between each content type (rational, emotional, and transactional content) and users' engagement. The dataset contained 1,038 social media posts, with 1,336,741 fan likes and 95,996 comments, collected from Facebook and Instagram. The results reveal that the effectiveness of social media content on users' engagement is moderated by content context. The findings contribute to understanding engagement and users' experience with social media. This study is among the first to empirically assess the construct of social media engagement behavior through the effects of content types and content contexts across two social media platforms.
Article
Full-text available
This article investigates the factors that shape the circulation of political content on social media. We analyze an experiment embedded within a nationally representative survey of U.S. youth that randomly assigned participants to see a short post designed to resemble content that circulates through social media. The post was experimentally manipulated to vary in both its ideology and whether it contained factually inaccurate information. In general, we found that participants' intentions to circulate a post on social media were strongly influenced by whether that post aligned with their ideology, but not by whether it contained misinformation. The relative effects of ideological alignment and misinformation were found to differ according to participants' level of political knowledge and engagement, indicating that different groups of young people are susceptible to particular kinds of misinformation.
Article
Full-text available
This study is the first to scrutinize the psychological effects of online astroturfing in the context of Russia’s digitally enabled foreign propaganda. Online astroturfing is a communicative strategy that uses websites, “sock puppets,” or social bots to create the false impression that a particular opinion has widespread public support. We exposed N = 2353 subjects to pro-Russian astroturfing comments and tested: (1) their effects on political opinions and opinion certainty and (2) the efficiency of three inoculation strategies to prevent these effects. All effects were investigated across three issues and from a short- and long-term perspective. Results show that astroturfing comments can indeed alter recipients’ opinions, and increase uncertainty, even when subjects are inoculated before exposure. We found exclusively short-term effects of only one inoculation strategy (refutational-same). As these findings imply, preemptive media literacy campaigns should deploy (1) continuous rather than one-time efforts and (2) issue specific rather than abstract inoculation messages.
Article
Full-text available
Purpose of review: Digital mental health (DMH) interventions provide opportunities to alleviate mental health disparities among marginalized populations by overcoming traditional barriers to care and putting quality mental health services in the palm of one’s hand. While progress has been made towards realizing this goal, the potential for impactful change has yet to be realized. This paper reviews current examples of DMH interventions for certain marginalized and underserved groups, namely, ethnic and racial minorities including Latinx and African-Americans, rural populations, individuals experiencing homelessness, and sexual and gender minorities. Recent findings: Strengths and opportunities, along with the needs and considerations, of each group are discussed as they pertain to the development and dissemination of DMH interventions. Our review focuses on several DMH interventions that have been specifically designed for marginalized populations with a culturally sensitive approach along with other existing interventions that have been tailored to fit the needs of the target population. Overall, evidence is beginning to show promise for the feasibility and acceptability of DMH interventions for these groups, but large-scale efficacy testing and scaling potential are still lacking. Summary: These examples of how DMH can potentially positively impact marginalized populations should motivate developers, researchers, and practitioners to work collaboratively with stakeholders to deliver DMH interventions to these underserved populations in need.
Article
Full-text available
Research on social media use during election campaigns has largely focused on Twitter. Building on recommendations from previous scholarship, the work presented here provides comparative insights into party and citizen engagement on several platforms (Facebook, Twitter, Instagram, and YouTube) during the 2017 Norwegian elections. Results indicate that the themes of popular, "viral" posts vary across platforms, suggesting the need to adapt political messages to each specific outlet. The findings are discussed in light of the suggested "analytics turn": when political actors can gauge the minutiae of how their online efforts are engaged with, how do those types of insights influence the shape and content of political campaigns?
Article
Full-text available
Significance: Depression is disabling and treatable, but underdiagnosed. In this study, we show that the content shared by consenting users on Facebook can predict a future occurrence of depression in their medical records. Language predictive of depression includes references to typical symptoms, including sadness, loneliness, hostility, rumination, and increased self-reference. This study suggests that an analysis of social media data could be used to screen consenting individuals for depression. Further, social media content may point clinicians to specific symptoms of depression.
Article
Full-text available
Absolutist thinking is considered a cognitive distortion by most cognitive therapies for anxiety and depression. Yet, there is little empirical evidence of its prevalence or specificity. Across three studies, we conducted a text analysis of 63 Internet forums (over 6,400 members) using the Linguistic Inquiry and Word Count software to examine absolutism at the linguistic level. We predicted and found that anxiety, depression, and suicidal ideation forums contained more absolutist words than control forums (ds > 3.14). Suicidal ideation forums also contained more absolutist words than anxiety and depression forums (ds > 1.71). We show that these differences are more reflective of absolutist thinking than psychological distress. It is interesting that absolutist words tracked the severity of affective disorder forums more faithfully than negative emotion words. Finally, we found elevated levels of absolutist words in depression recovery forums. This suggests that absolutist thinking may be a vulnerability factor.
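
A small illustration of the dictionary-based measure the study relies on: the fraction of words in a text that belong to an absolutist word list. The list below is a short invented subset, not the validated dictionary used in the paper.

```python
# Sketch: share of absolutist words in a text, in the spirit of the LIWC-style
# analysis described above. The word list is an illustrative subset only.
import re

ABSOLUTIST = {"always", "never", "completely", "totally", "nothing",
              "everything", "entirely", "all", "every", "constantly"}

def absolutist_share(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in ABSOLUTIST for w in words) / len(words)

print(absolutist_share("Nothing ever works out; I always ruin everything."))
```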
Article
Full-text available
Significance: Twitter and other social media platforms are believed to have altered the course of numerous historical events, from the Arab Spring to the US presidential election. Online social networks have become a ubiquitous medium for discussing moral and political ideas. Nevertheless, the field of moral psychology has yet to investigate why some moral and political ideas spread more widely than others. Using a large sample of social media communications concerning polarizing issues in public policy debates (gun control, same-sex marriage, climate change), we found that the presence of moral-emotional language in political messages substantially increases their diffusion within (and less so between) ideological group boundaries. These findings offer insights into how moral ideas spread within networks during real political discussion.
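
One way to illustrate this kind of diffusion analysis is a count regression of shares on the number of moral-emotional words per message; the sketch below uses simulated data and is not the authors' pipeline.

```python
# Sketch with simulated data: regress retweet counts on the number of
# moral-emotional words per message using a Poisson GLM.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
moral_words = rng.integers(0, 5, size=200)                # words per message
retweets = rng.poisson(np.exp(1.0 + 0.2 * moral_words))   # simulated counts

X = sm.add_constant(moral_words.astype(float))
fit = sm.GLM(retweets, X, family=sm.families.Poisson()).fit()

# exp(slope) = multiplicative change in expected retweets per additional
# moral-emotional word in this simulated example.
print(np.exp(fit.params))
```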
Article
Full-text available
It has recently become possible to study the dynamics of information diffusion in techno-social systems at scale, due to the emergence of online platforms, such as Twitter, with millions of users. One question that systematically recurs is whether information spreads according to simple or complex dynamics: does each exposure to a piece of information have an independent probability of a user adopting it (simple contagion), or does this probability depend instead on the number of sources of exposure, increasing above some threshold (complex contagion)? Most studies to date are observational and, therefore, unable to disentangle the effects of confounding factors such as social reinforcement, homophily, limited attention, or network community structure. Here we describe a novel controlled experiment that we performed on Twitter using "social bots" deployed to carry out coordinated attempts at spreading information. We propose two Bayesian statistical models describing simple and complex contagion dynamics, and test the competing hypotheses. We provide experimental evidence that the complex contagion model describes the observed information diffusion behavior more accurately than simple contagion. Future applications of our results include more effective defenses against malicious propaganda campaigns on social media, improved marketing and advertisement strategies, and design of effective network intervention techniques.
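
The contrast between the two hypotheses can be stated compactly as adoption-probability functions; the sketch below is an illustration of the general idea, with made-up parameter values, not the paper's Bayesian models.

```python
# Sketch of the two competing adoption models. Under simple contagion each
# exposure is an independent chance to adopt; under complex contagion adoption
# requires the number of distinct sources to clear a threshold.
def p_adopt_simple(k_exposures: int, p: float = 0.1) -> float:
    # probability of adopting after k independent exposures
    return 1 - (1 - p) ** k_exposures

def p_adopt_complex(k_sources: int, threshold: int = 3,
                    p_below: float = 0.02, p_above: float = 0.5) -> float:
    # adoption probability jumps once the number of sources reaches a threshold
    return p_above if k_sources >= threshold else p_below

for k in range(1, 6):
    print(k, round(p_adopt_simple(k), 3), p_adopt_complex(k))
```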
Article
Full-text available
This paper reports on a landmark study that field-tests the ability of a large retailer to change the behaviour of its millions of customers. Previous studies have suggested that social media interaction can influence behaviour. This study implemented three interventions with messages to encourage reductions in food waste. The first was a social influence intervention that used the retailer’s Facebook pages to encourage its customers to interact. Two additional information interventions were used as a comparison, delivered through the retailer’s print/digital magazine and e-newsletter. Three national surveys tracked customers’ self-reported food waste one month before, as well as two weeks after and five months after, the interventions. The control group included those who said they had not seen any of the interventions. The results were surprising and significant in that the social media and e-newsletter interventions as well as the control group all showed significant reductions in self-reported food waste by customers over the study period. Hence, in this field study, social media does not seem to replicate enough of the effect of ‘face-to-face’ interaction shown in previous studies to change behaviour above other factors in the shopping setting. This may indicate that results from laboratory-based studies may over-emphasise the effect of social media interventions.
Article
Full-text available
Cognitive distortions are negative biases in thinking that are theorized to represent vulnerability factors for depression and dysphoria. Despite the emphasis placed on cognitive distortions in the context of cognitive behavioural theory and practice, a paucity of research has examined the mechanisms through which they impact depressive symptomatology. Both adaptive and maladaptive styles of humor represent coping strategies that may mediate the relation between cognitive distortions and depressive symptoms. The current study examined the correlations between the frequency and impact of cognitive distortions across both social and achievement-related contexts and types of humor. Cognitive distortions were associated with reduced use of adaptive Affiliative and Self-Enhancing humor styles and increased use of maladaptive Aggressive and Self-Defeating humor. Reduced use of Self-Enhancing humor mediated the relationship between most types of cognitive distortions and depressed mood, indicating that distorted negative thinking may interfere with an individual’s ability to adopt a humorous and cheerful outlook on life (i.e., use Self-Enhancing humor) as a way of regulating emotions and coping with stress, thereby resulting in elevated depressive symptoms. Similarly, Self-Defeating humor mediated the association of the social impact of cognitive distortions with depression, such that this humor style may be used as a coping strategy for dealing with distorted thinking that ultimately backfires and results in increased dysphoria.
Article
Full-text available
Online users tend to select claims that adhere to their system of beliefs and to ignore dissenting information. Confirmation bias, indeed, plays a pivotal role in viral phenomena. Furthermore, the wide availability of content on the web fosters the aggregation of like-minded people, where debates tend to enforce group polarization. Such a configuration might alter the public debate and thus the formation of public opinion. In this paper we provide a mathematical model to study online social debates and the related polarization dynamics. We assume the basic updating rule of the Bounded Confidence Model (BCM) and we develop two variations: (a) the Rewire with Bounded Confidence Model (RBCM), in which discordant links are broken until convergence is reached; and (b) the Unbounded Confidence Model, under which interaction among discordant pairs of users is allowed even with a negative feedback, either with the rewiring step (RUCM) or without it (UCM). From numerical simulations we find that the new models (UCM and RUCM), unlike the BCM, are able to explain the coexistence of two stable final opinions, often observed in reality. Lastly, we present a mean-field approximation of the newly introduced models.
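
For reference, here is a minimal sketch of the basic Deffuant-style Bounded Confidence updating rule that the RBCM/UCM/RUCM variants modify; parameter values are illustrative.

```python
# Minimal Bounded Confidence Model: two random agents compromise only if
# their opinions differ by less than a confidence threshold epsilon.
import random

def bcm_step(opinions, epsilon=0.2, mu=0.5):
    i, j = random.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < epsilon:
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift
        opinions[j] -= shift

random.seed(1)
opinions = [random.random() for _ in range(200)]
for _ in range(50000):
    bcm_step(opinions)

# Surviving opinion clusters (rounded); with epsilon = 0.2 only a few remain.
print(sorted({round(o, 1) for o in opinions}))
```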
Article
Full-text available
Objective: To examine emotion regulation (ER) among individuals with high (HSA) and low social anxiety (LSA) and the effects of 1 week of practiced cognitive reappraisal using self-report, daily diary measures and lab tasks. Method: HSAs received reappraisal (HSA-R; n = 43) or monitoring (HSA-M; n = 40) instructions. LSAs received monitoring instructions (LSA-M; n = 41). Self-report measures of social anxiety and ER, and a lab task of reappraisal were administered at baseline and after 1 week. Daily diaries of anxiety and ER were also collected. Results: At baseline, HSAs compared with LSAs reported lower self-efficacy of reappraisal and higher frequency and self-efficacy of suppression, but no differences emerged in the reappraisal task. Following the intervention, the HSA-R compared with the HSA-M reported lower symptom severity, greater self-efficacy of reappraisal but equal daily anxiety. HSA-R used reappraisal mostly combined with suppression (74.76% of situations). Post hoc analyses demonstrated that clinical diagnosis, but not severity, moderated the intervention effect. Conclusions: The results demonstrate the efficacy of a short intervention in social anxiety, and provide additional areas of research for improving its treatment.
Article
Conspiracy theory beliefs are notoriously persistent. Influential hypotheses propose that they fulfill important psychological needs, thus resisting counterevidence. Yet previous failures in correcting conspiracy beliefs may be due to counterevidence being insufficiently compelling and tailored. To evaluate this possibility, we leveraged developments in generative artificial intelligence and engaged 2190 conspiracy believers in personalized evidence-based dialogues with GPT-4 Turbo. The intervention reduced conspiracy belief by ~20%. The effect remained 2 months later, generalized across a wide range of conspiracy theories, and occurred even among participants with deeply entrenched beliefs. Although the dialogues focused on a single conspiracy, they nonetheless diminished belief in unrelated conspiracies and shifted conspiracy-related behavioral intentions. These findings suggest that many conspiracy theory believers can revise their views if presented with sufficiently compelling evidence.
Article
The inherent nature of social media content poses serious challenges to practical applications of sentiment analysis. We present VADER, a simple rule-based model for general sentiment analysis, and compare its effectiveness to eleven typical state-of-practice benchmarks including LIWC, ANEW, the General Inquirer, SentiWordNet, and machine learning oriented techniques relying on Naive Bayes, Maximum Entropy, and Support Vector Machine (SVM) algorithms. Using a combination of qualitative and quantitative methods, we first construct and empirically validate a gold-standard list of lexical features (along with their associated sentiment intensity measures) which are specifically attuned to sentiment in microblog-like contexts. We then combine these lexical features with consideration for five general rules that embody grammatical and syntactical conventions for expressing and emphasizing sentiment intensity. Interestingly, using our parsimonious rule-based model to assess the sentiment of tweets, we find that VADER outperforms individual human raters (F1 Classification Accuracy = 0.96 and 0.84, respectively), and generalizes more favorably across contexts than any of our benchmarks.
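
VADER is distributed as an open-source Python package (vaderSentiment); a minimal usage sketch with a made-up example sentence:

```python
# Usage sketch with the publicly released VADER implementation
# (pip install vaderSentiment); the example text is made up.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("The plot was GREAT, but the ending... not so much :(")
print(scores)  # dict with 'neg', 'neu', 'pos', and a normalized 'compound' score
```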
Article
Though prior studies have analyzed the textual characteristics of online comments about politics, less is known about how selection into commenting behavior and exposure to other people’s comments changes the tone and content of political discourse. This article makes three contributions. First, we show that frequent commenters on Facebook are more likely to be interested in politics, to have more polarized opinions, and to use toxic language in comments in an elicitation task. Second, we find that people who comment on articles in the real world use more toxic language on average than the public as a whole; levels of toxicity in comments scraped from media outlet Facebook pages greatly exceed what is observed in comments we elicit on the same articles from a nationally representative sample. Finally, we demonstrate experimentally that exposure to toxic language in comments increases the toxicity of subsequent comments.
Article
Does the consumption of ideologically congruent news on social media exacerbate polarization? I estimate the effects of social media news exposure by conducting a large field experiment randomly offering participants subscriptions to conservative or liberal news outlets on Facebook. I collect data on the causal chain of media effects: subscriptions to outlets, exposure to news on Facebook, visits to online news sites, and sharing of posts, as well as changes in political opinions and attitudes. Four main findings emerge. First, random variation in exposure to news on social media substantially affects the slant of news sites that individuals visit. Second, exposure to counter-attitudinal news decreases negative attitudes toward the opposing political party. Third, in contrast to the effect on attitudes, I find no evidence that the political leanings of news outlets affect political opinions. Fourth, Facebook’s algorithm is less likely to supply individuals with posts from counter-attitudinal outlets, conditional on individuals subscribing to them. Together, the results suggest that social media algorithms may limit exposure to counter-attitudinal news and thus increase polarization. (JEL C93, D72, L82)
Article
Background: Depression is a leading cause of disability. International guidelines recommend screening for depression, and the Patient Health Questionnaire 9 (PHQ-9) has been identified as the most reliable screening tool. We reviewed the evidence for using it within the primary care setting. Methods: We retrieved studies from MEDLINE, Embase, PsycINFO, CINAHL and the Cochrane Library that carried out primary care-based depression screening using the PHQ-9 in populations older than 12 years, from 1995 to 2018. Results: Forty-two studies were included in the systematic review. Most of the studies were cross-sectional (N=40, 95%), conducted in high-income countries (N=27, 71%) and recruited adult populations (N=38, 90%). The accuracy of the PHQ-9 was evaluated in 31 (74%) studies with a two-stage screening system, with the structured interview most often carried out by primary care and mental health professionals. Most of the studies employed a cut-off score of 10 (N=24, 57%; cut-offs ranged from 5 to 15). The overall sensitivity of the PHQ-9 ranged from 0.37 to 0.98, specificity from 0.42 to 0.99, positive predictive value from 0.09 to 0.92, and negative predictive value from 0.8 to 1. Limitations: The lack of longitudinal studies, small sample sizes, and the heterogeneity of primary-care settings limited the generalizability of our results. Conclusions: The PHQ-9 has been widely validated and is recommended in a two-stage screening process. Longitudinal studies are necessary to provide evidence of long-term screening effectiveness.
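
To make the screening arithmetic concrete, here is a small sketch of PHQ-9 total scoring against the common cut-off of 10 and the four accuracy metrics summarized in the review; the item scores and confusion-matrix counts are invented.

```python
# Sketch: PHQ-9 total scoring against the most common cut-off (10) and the
# screening metrics reported in the review, computed from a 2x2 table.
def phq9_positive(item_scores, cutoff=10):
    """item_scores: nine items, each scored 0-3."""
    return sum(item_scores) >= cutoff

def screening_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

print(phq9_positive([2, 1, 2, 1, 1, 2, 1, 1, 1]))   # total 12 -> positive screen
print(screening_metrics(tp=40, fp=25, fn=8, tn=227))  # invented counts
```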
Article
Background: The association of adolescent social media use with mental health symptoms, especially depression, has recently attracted a great deal of interest in public media as well as the scientific community. Some studies have cited statistically significant associations between adolescent social media use and depression and have proposed that parents must regulate their adolescents’ social media use in order to protect their mental health. Method: In order to rigorously assess the size of the effect that has been reported in the current scientific literature, we conducted a meta-analysis of studies that measured the association between social media use specifically and depressive symptoms amongst early to mid-adolescents (11-18 years old). We searched Psychnet, PubMed, and Web of Science with the following terms: online social networks, social media, internet usage, facebook, twitter, instagram, myspace, snapchat, and depression. Results: We found a small but significant positive correlation (k=11 studies, r=.11, p<.01) between adolescent social media use and depressive symptoms. There was also high heterogeneity (I²=95.22%), indicating substantial variation among studies. Conclusions: High heterogeneity along with the small overall effect size observed in the relationship between self-reported social media use and depressive symptoms suggests that other factors are likely to act as significant moderators of the relationship. We suggest that future research should be focused on understanding which types of use may be harmful (or helpful) to mental health, rather than focusing on overall use measures that likely reflect highly heterogeneous exposures.
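
The pooled correlation and heterogeneity statistics reported above come from standard random-effects meta-analysis; the sketch below shows a DerSimonian-Laird pooling of Fisher-transformed correlations on invented study-level data, not the authors' dataset.

```python
# Sketch of DerSimonian-Laird random-effects pooling of correlations; the
# study-level (r, n) pairs below are invented for illustration.
import numpy as np

r = np.array([0.05, 0.08, 0.12, 0.15, 0.10, 0.20])   # hypothetical correlations
n = np.array([300, 250, 800, 150, 500, 400])          # hypothetical sample sizes

z = np.arctanh(r)              # Fisher z transform
v = 1.0 / (n - 3)              # sampling variance of z
w = 1.0 / v                    # fixed-effect weights

z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
df = len(r) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
I2 = max(0.0, (Q - df) / Q) * 100

w_re = 1.0 / (v + tau2)        # random-effects weights
r_pooled = np.tanh(np.sum(w_re * z) / np.sum(w_re))
print(f"pooled r = {r_pooled:.3f}, I^2 = {I2:.1f}%")
```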
Article
News—real or fake—is now abundant on social media. News posts on social media focus users’ attention on the headlines, but does it matter who wrote the article? We investigate whether changing the presentation format to highlight the source of the article affects its believability and how social media users choose to engage with it. We conducted two experiments and found that nudging users to think about who wrote the article influenced the extent to which they believed it. The presentation format of highlighting the source had a main effect; it made users more skeptical of all articles, regardless of the source’s credibility. For unknown sources, low source ratings had a direct effect on believability. Believability, in turn, influenced the extent to which users would engage with the article (e.g., read, like, comment, and share). We also found confirmation bias to be rampant: users were more likely to believe articles that aligned with their beliefs, over and above the effects of other factors.
Article
To the Editor Drs Leichsenring and Steinert argued that cognitive behavioral therapies (CBTs) should not be the gold standard treatment for mental disorders.¹ They acknowledged that CBTs have been more widely studied than other forms of therapy but suggested that other treatments should be considered equivalent to CBTs unless evidence emerges to suggest otherwise. In doing so, they shifted the burden of evidence about the efficacy of other treatments away from those treatments and onto the evidence base for CBTs. In other areas of medicine, treatments with a broader positive evidence base are not considered equal to less widely studied treatments.
Article
Since the introduction of Beck’s cognitive theory of emotional disorders, and their treatment with psychotherapy, cognitive-behavioral approaches have become the most extensively researched psychological treatment for a wide variety of disorders. Despite this, the relative contribution of cognitive to behavioral approaches to treatment are poorly understood and the mechanistic role of cognitive change in therapy is widely debated. We critically review this literature, focusing on the mechanistic role of cognitive change across cognitive and behavioral therapies for depressive and anxiety disorders.
Chapter
This chapter reviews theoretical developments and empirical studies related to causal inference on social networks from both experimental and observational studies. Discussion is given to the effect of experimental interventions on outcomes and behaviors and how these effects relate to the presence of social ties, the position of individuals within the network, and the underlying structure and properties of the network. The effects of such experimental interventions on changing the network structure itself and potential feedback between behaviors and network changes are also discussed. With observational data, correlations in behavior or outcomes between individuals with network ties may be due to social influence, homophily, or environmental confounding. With cross-sectional data these three sources of correlation cannot be distinguished. Methods employing longitudinal observational data that can help distinguish between social influence, homophily, and environmental confounding are described, along with their limitations. Proposals are made regarding future research directions and methodological developments that would help put causal inference on social networks on a firmer theoretical footing.