Conference Paper

Disagree? You Must be a Bot! How Beliefs Shape Twitter Profile Perceptions

... Based on previous results on the interplay of humanness and partisanship (Wischnewski et al., 2021; Yan et al., 2020), we assume that the effects of humanness and partisanship influence each other. Opinion congruency might be more pronounced for highly human-like accounts, whereas opinion congruency might matter less when accounts are less human-like. ...
... However, previous studies have also found that motivated reasoning affects how users perceive the humanness of profiles. Results indicate that users perceive opinion-congruent accounts as more human-like, and opinion-incongruent accounts as less human-like and more bot-like (Wischnewski et al., 2021; Yan et al., 2020). Building on these results, we take our assumptions in H1-H4 one step further and suggest a mediating role of account perceptions. ...
... If a participant's partisanship and the displayed partisanship of a profile match (congruent condition), the participant is more likely to engage with the profile than in the non-matching condition. This effect can partly be explained by the effect of partisanship congruency on perceived humanness (see Wischnewski et al., 2021; Yan et al., 2020), where congruent profiles are perceived as more human-like than incongruent profiles. ...
Article
Full-text available
This article investigates under which conditions users on Twitter engage with or react to social bots. Based on insights from human–computer interaction and motivated reasoning, we hypothesize that (1) users are more likely to engage with human-like social bot accounts and (2) users are more likely to engage with social bots that promote content congruent to the user’s partisanship. In a preregistered 3 × 2 within-subject experiment, we asked N = 223 US Americans to indicate whether they would engage with or react to different Twitter accounts. Accounts systematically varied in their displayed humanness (low humanness, medium humanness, and high humanness) and partisanship (congruent and incongruent). In line with our hypotheses, we found that the more human-like accounts are, the greater the likelihood that users would engage with or react to them. However, this was only true for accounts that shared the same partisanship as the user.
... Parts of the available bot research seem to support this conclusion. Experimental data reported by Wischnewski et al. (2021) suggest that social media users tend to perceive partisan accounts as more credible. This perception, in turn, hampers their ability to differentiate between humans and bots. ...
... For instance, Yan et al. (2021) had to exclude 37% of their initial sample who identified as Independents, in order to test partisan bias effects among US-Democrats and US-Republicans. Furthermore, Wischnewski et al. (2021) noticed that their findings should be interpreted with caution, since the main effect of partisan congruency on bot identification was contingent on a set of covariates. Finally, both research teams introduced additional stimulus variability by manipulating the level of ambiguity of the presented accounts. ...
Article
Full-text available
Social bots, employed to manipulate public opinion, pose a novel threat to digital societies. Existing bot research has emphasized technological aspects while neglecting psychological factors shaping human–bot interactions. This research addresses this gap within the context of the US‐American electorate. Two datasets provide evidence that partisanship distorts (a) online users' representation of bots, (b) their ability to identify them, and (c) their intentions to interact with them. Study 1 explores global bot perceptions through survey data from N = 452 Twitter (now X) users. Results suggest that users tend to attribute bot‐related dangers to political adversaries, rather than recognizing bots as a shared threat to political discourse. Study 2 (N = 619) evaluates the consequences of such misrepresentations for the quality of online interactions. In an online experiment, participants were asked to differentiate between human and bot profiles. Results indicate that partisan leanings explained systematic judgement errors. The same data suggest that participants aim to avoid interacting with bots. However, biased judgements may undermine this motivation in practice. In sum, the presented findings underscore the importance of interdisciplinary strategies that consider technological and human factors to address the threats posed by bots in a rapidly evolving digital landscape.
... Unlike previous research, we do not rely on settings where participants are faced with constructed bot accounts in an experimental setup and later surveyed for their experiences (Yan et al. 2021; Wischnewski et al. 2021), but follow an empirical approach and analyze inter-user communications, in particular situations in which one Twitter user accuses another one of being a bot. This allows us not only to explore the characteristics of the accounts frequently accused of being bots by other Twitter users, but also gives us insights into the topical contexts as well as the motivation and reasoning provided in the accusations, as they often contain a justification for the verdict. Leveraging data from Twitter's inception in 2007, we explore the context and meaning of the bot accusations from different perspectives and track their evolution over the long term. ...
... While they have to reject this hypothesis for the main set of participants, they find that it holds true for the more experienced Twitter users, leading them to speculate that this effect stems from a different usage of the term (social) bot between the two groups. According to the authors, "participants with prior knowledge of social bots and participants who spend more time on social media might apply the term social bot as a pejorative term to indicate disagreement and discredit accounts" (Wischnewski et al. 2021). While they speculate that the desire to "show disagreement by labeling accounts as social bots (expressive disagreement)" amplifies the effect of motivated reasoning, they do not find evidence for this claim other than a single blog post from 2019 and a general reference to "popular media". ...
Article
The characterization and detection of bots with their presumed ability to manipulate society on social media platforms have been subject to many research endeavors over the last decade. In the absence of ground truth data (i.e., accounts that are labeled as bots by experts or self-declare their automated nature), researchers interested in the characterization and detection of bots may want to tap into the wisdom of the crowd. But how many people need to accuse another user as a bot before we can assume that the account is most likely automated? And more importantly, are bot accusations on social media at all a valid signal for the detection of bots? Our research presents the first large-scale study of bot accusations on Twitter and shows how the term bot became an instrument of dehumanization in social media conversations since it is predominantly used to deny the humanness of conversation partners. Consequently, bot accusations on social media should not be naively used as a signal to train or test bot detection models.
Preprint
Full-text available
The characterization and detection of social bots with their presumed ability to manipulate society on social media platforms have been subject to many research endeavors over the last decade, leaving a research gap on the impact of bots and accompanying phenomena on platform users and society. In this systematic data-driven study, we explore the users' perception of the construct bot at a large scale, focusing on the evolution of bot accusations over time. We create and analyze a novel dataset consisting of bot accusations that have occurred on the social media platform Twitter since 2007, providing insights into the meanings and contexts of these particular communication situations. We find evidence that over time the term bot has moved away from its technical meaning to become an "insult" specifically used in polarizing discussions to discredit and ultimately dehumanize the opponent.
... Documenting and understanding these algorithmic perceptions is a well-established field in the HCI community [21,23,24,37,73,81,99,121]. Prior work on social media algorithm perceptions focuses on advertisements [85,88], social media feeds [23,24,33], misinformation [15,16,115], and several other social media features relying on algorithmic decision making [19,21,37]. Prior studies explore models based on patents [2] or focused on feed curation [31], and they have not explicitly exposed the functionality of authentic models to study participants. ...
Preprint
Full-text available
Machine learning models deployed locally on social media applications are used for features such as face filters that read faces in real time, and they expose sensitive attributes to the apps. However, the deployment of machine learning models, e.g., when, where, and how they are used, in social media applications is opaque to users. We aim to address this opacity and investigate how social media user perceptions and behaviors change once exposed to these models. We conducted user studies (N=21) and found that participants were unaware of both what the models output and when the models were used in Instagram and TikTok, two major social media platforms. In response to being exposed to the models' functionality, we observed long-term behavior changes in 8 participants. Our analysis uncovers the challenges and opportunities in providing transparency for machine learning models that interact with local user data.
... The biases of annotators can therefore propagate through the pipeline and affect downstream tasks. As a few studies have already revealed perceptual biases in human-bot interactions [17, 23], more research is needed. ...
Article
Full-text available
Automated accounts on social media that impersonate real users, often called “social bots,” have received a great deal of attention from academia and the public. Here we present experiments designed to investigate public perceptions and policy preferences about social bots, in particular how they are affected by exposure to bots. We find that before exposure, participants have some biases: they tend to overestimate the prevalence of bots and see others as more vulnerable to bot influence than themselves. These biases are amplified after bot exposure. Furthermore, exposure tends to impair judgment of bot-recognition self-efficacy and increase propensity toward stricter bot-regulation policies among participants. Decreased self-efficacy and increased perceptions of bot influence on others are significantly associated with these policy preference changes. We discuss the relationship between perceptions about social bots and growing dissatisfaction with the polluted social media environment.
... Digital marketing needs to be continuously updated with regard to its content, because changes in content are able to attract consumers (Abdurohim, 2021c; Nurhafida & Sembiring, 2021; Wischnewski et al., 2021). ...
Book
Full-text available
The book Pemasaran Era Kini: Pendekatan Berbasis Digital (Marketing in the Current Era: A Digital-Based Approach) is structured around theoretical concepts and examples of their application. The book consists of 16 chapters, discussed in detail: Chapter 1 Introduction and Basic Concepts of Digital Marketing, Chapter 2 Consumer Behavior in the Digital Era, Chapter 3 Digital Marketing vs. Traditional Marketing, Chapter 4 Digital Marketing Strategy, Chapter 5 Digital Marketing Communication, Chapter 6 Digital Customer Relationship Management, Chapter 7 Social Media Marketing Strategy, Chapter 8 Social Media and Consumer Engagement, Chapter 9 Social Media Business Design, Chapter 10 Social Media Endorsers and Social Media Platforms, Chapter 11 Social Applications and Social Graphs, Chapter 12 Social Media and Social Media Channels, Chapter 13 e-Consumer and e-WOM, Chapter 14 Online Marketplace, Chapter 15 Business to Business (B2B) and Business to Consumer (B2C), and Chapter 16 Benefits of Digital Marketing for MSMEs.
... At the same time, related studies have found a tendency for humans to extend real-life psychological dynamics to artificial entities [18]. For example, one study analyzed the ability of humans to distinguish between political social bots and humans on Twitter and found that accounts expressing disagreement were often assessed as more bot-like than accounts expressing agreement [19]. It has also been found that the overall engagement of a user's social network predicts interactions with and responses to social bots, and that the number of friends and followers predicts whether users will interact with bots [20]. ...
Article
Full-text available
In the field of social media, the systematic impact that bot users bring to the dissemination of public opinion has been a key concern of the research. To achieve more effective opinion management, it is important to understand how and why behavior differs between bot users and human users. The study compares the differences in behavioral characteristics and diffusion mechanisms between bot users and human users during public opinion dissemination, using public health emergencies as the research target, and further provides specific explanations for the differences. First, the study classified users with bot characteristics and human users by establishing formulas based on user indicator characteristics. Second, the study used deep learning methods such as Top2Vec and BERT to extract topics and sentiments, and used social network analysis methods to construct network graphs and compare network attribute features. Finally, the study further compared the differences in information dissemination between posts published by bot users and human users through multi-factor ANOVA. It was found that there were significant differences in behavioral characteristics and diffusion mechanisms between bot users and human users. The findings can help guide the public to pay attention to topic shifting and promote the diffusion of positive emotions in social networks, which in turn can better support the management of emergencies and the maintenance of online order.
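The topic and sentiment steps described in this abstract can be approximated with off-the-shelf tools. Below is a minimal sketch, assuming the publicly available top2vec and transformers packages and a hypothetical tweets.csv input file; the model choices and column names are illustrative assumptions, not the study's actual pipeline.

    import pandas as pd
    from top2vec import Top2Vec
    from transformers import pipeline

    # Hypothetical input: one tweet per row with a "text" column.
    tweets = pd.read_csv("tweets.csv")["text"].tolist()

    # Unsupervised topic discovery: Top2Vec jointly embeds documents and words,
    # then clusters the document embeddings into topics (needs a sizable corpus).
    topic_model = Top2Vec(tweets, speed="learn", workers=4)
    topic_words, word_scores, topic_nums = topic_model.get_topics()

    # Sentiment scoring with a pretrained transformer classifier
    # (crudely truncated to the first 512 characters per tweet).
    sentiment = pipeline("sentiment-analysis")
    labels = [sentiment(text[:512])[0]["label"] for text in tweets]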
Article
With the rise and prevalence of social bots, their negative impacts on society are gradually recognized, prompting research attention to effective detection and countermeasures. Recently, graph neural networks (GNNs) have flourished and have been applied to social bot detection research, improving the performance of detection methods effectively. However, existing GNN-based social bot detection methods often fail to account for the heterogeneous associations among users within social media contexts, especially the heterogeneous integration of social bots into human communities within the network. To address this challenge, we propose a heterogeneous compatibility perspective for social bot detection, in which we preserve more detailed information about the varying associations between neighbors in social media contexts. Subsequently, we develop a compatibility-aware graph neural network (CGNN) for social bot detection. CGNN consists of an efficient feature processing module, and a lightweight compatibility-aware GNN encoder, which enhances the model’s capacity to depict heterogeneous neighbor relations by emulating the heterogeneous compatibility function. Through extensive experiments, we showed that our CGNN outperforms the existing state-of-the-art (SOTA) method on three commonly used social bot detection benchmarks while utilizing only about 2% of the parameter size and 10% of the training time compared with the SOTA method. Finally, further experimental analysis indicates that CGNN can identify different edge categories to a significant extent. These findings, along with the ablation study, provide strong evidence supporting the enhancement of GNN’s capacity to depict heterogeneous neighbor associations on social media bot detection tasks.
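The compatibility-aware encoder itself is not spelled out in the abstract, but the general setup (node-level bot classification with a graph neural network over the user graph) can be sketched with PyTorch Geometric. This is a plain two-layer GCN baseline with assumed feature and edge tensors, not the authors' CGNN.

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv

    class BotGCN(torch.nn.Module):
        """Generic GCN baseline: classify each user node as human (0) or bot (1)."""
        def __init__(self, in_dim, hidden_dim=64, num_classes=2):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden_dim)
            self.conv2 = GCNConv(hidden_dim, num_classes)

        def forward(self, x, edge_index):
            h = F.relu(self.conv1(x, edge_index))            # aggregate neighbour features
            h = F.dropout(h, p=0.5, training=self.training)
            return self.conv2(h, edge_index)                 # per-node class logits

    # x: [num_users, in_dim] profile/content features; edge_index: [2, num_edges] follow edges.
    # Random tensors stand in for a real benchmark graph.
    model = BotGCN(in_dim=32)
    logits = model(torch.randn(100, 32), torch.randint(0, 100, (2, 400)))

A compatibility-aware variant, as the abstract describes, would replace the uniform neighbour aggregation in these layers with edge-dependent weighting so that bot-human and human-human edges are not treated identically.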
Article
One of the most important problems of the digital age is the spread of disinformation. Today, the effects on society of deepfake products created using artificial intelligence are being debated in Turkey and around the world. All kinds of content, whether true, false, biased, or misleading, spread easily through social media and digital platforms. Moreover, politicians, journalists, political parties, artists, and companies are exposed to disinformation. Social media providers, digital platforms, and states need to take measures, using artificial intelligence techniques, against disinformation content that affects public safety. Literature reviews show that there are many techniques against disinformation, but that the disinformation content being produced can be countered with anti-disinformation techniques that use artificial intelligence. In this context, the study concludes that disinformation content produced using artificial intelligence can only be countered with techniques that are themselves produced with artificial intelligence.
Chapter
The idea that social media platforms like Twitter are inhabited by vast numbers of social bots has become widely accepted in recent years. Social bots are assumed to be automated social media accounts operated by malicious actors with the goal of manipulating public opinion. They are credited with the ability to produce content autonomously and to interact with human users. Social bot activity has been reported in many different political contexts, including the U.S. presidential elections, discussions about migration, climate change, and COVID-19. However, the relevant publications either use crude and questionable heuristics to discriminate between supposed social bots and humans or—in the vast majority of the cases—fully rely on the output of automatic bot detection tools, most commonly Botometer. In this paper, we point out a fundamental theoretical flaw in the widely-used study design for estimating the prevalence of social bots. Furthermore, we empirically investigate the validity of peer-reviewed Botometer-based studies by closely and systematically inspecting hundreds of accounts that had been counted as social bots. We were unable to find a single social bot. Instead, we found mostly accounts undoubtedly operated by human users, the vast majority of them using Twitter in an inconspicuous and unremarkable fashion without the slightest traces of automation. We conclude that studies claiming to investigate the prevalence, properties, or influence of social bots based on Botometer have, in reality, just investigated false positives and artifacts of this approach. Keywords: Social bots, Bot detection, Botometer, False positives
Article
Full-text available
Partisan disagreement over policy-relevant facts is a salient feature of contemporary American politics. Perhaps surprisingly, such disagreements are often the greatest among opposing partisans who are the most cognitively sophisticated. A prominent hypothesis for this phenomenon is that cognitive sophistication magnifies politically motivated reasoning, commonly defined as reasoning driven by the motivation to reach conclusions congenial to one's political group identity. Numerous experimental studies report evidence in favor of this hypothesis. However, in the designs of such studies, political group identity is often confounded with prior factual beliefs about the issue in question; and, crucially, reasoning can be affected by such beliefs in the absence of any political group motivation. This renders much existing evidence for the hypothesis ambiguous. To shed new light on this issue, we conducted three studies in which we statistically controlled for people's prior factual beliefs (attempting to isolate a direct effect of political group identity) when estimating the association between their cognitive sophistication, political group identity, and reasoning in the paradigmatic study design used in the literature. We observed a robust direct effect of political group identity on reasoning but found no evidence that cognitive sophistication magnified this effect. In contrast, we found fairly consistent evidence that cognitive sophistication magnified a direct effect of prior factual beliefs on reasoning. Our results suggest that there is currently a lack of clear empirical evidence that cognitive sophistication magnifies politically motivated reasoning as commonly understood and emphasize the conceptual and empirical challenges that confront tests of this hypothesis.
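The analytic strategy sketched in this abstract, estimating the association between group identity and reasoning while holding prior factual beliefs constant and testing whether cognitive sophistication moderates either path, maps onto a regression with interaction terms. A minimal illustration with simulated data and hypothetical variable names (crt as a sophistication proxy), not the studies' actual models:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "evaluation": rng.normal(size=n),          # e.g., rated strength of congenial evidence
        "party_identity": rng.integers(0, 2, n),   # 0/1 group identity (hypothetical coding)
        "prior_belief": rng.normal(size=n),        # prior factual belief about the issue
        "crt": rng.normal(size=n),                 # cognitive sophistication proxy
    })

    # Direct effect of identity controlling for prior beliefs, plus interactions
    # testing whether sophistication magnifies either the identity or the belief effect.
    model = smf.ols("evaluation ~ party_identity * crt + prior_belief * crt", data=df).fit()
    print(model.summary())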
Article
Full-text available
The move of news audiences to social media has presented a major challenge for news organizations. How to adapt and adjust to this social media environment is an important issue for sustainable news business. News bots are one of the key technologies offered in the current media environment and are widely applied in news production, dissemination, and interaction with audiences. While benefits and concerns coexist about the application of bots in news organizations, the current study aimed to examine how social media users perceive news bots, the factors that affect their acceptance of bots in news organizations, and how this is related to their evaluation of social media news in general. An analysis of the US national survey dataset showed that self-efficacy (confidence in identifying content from a bot) was a successful predictor of news bot acceptance, which in turn resulted in a positive evaluation of social media news in general. In addition, an individual’s perceived prevalence of social media news from bots had an indirect effect on acceptance by increasing self-efficacy. The results are discussed with the aim of providing a better understanding of news audiences in the social media environment, and practical implications for the sustainable news business are suggested.
Article
Full-text available
Political astroturfing, a centrally coordinated disinformation campaign in which participants pretend to be ordinary citizens acting independently, has the potential to influence electoral outcomes and other forms of political behavior. Yet, it is hard to evaluate the scope and effectiveness of political astroturfing without “ground truth” information, such as the verified identity of its agents and instigators. In this paper, we study the South Korean National Information Service’s (NIS) disinformation campaign during the presidential election in 2012, taking advantage of a list of participating accounts published in court proceedings. Features that best distinguish these accounts from regular users in contemporaneously collected Twitter data are traces left by coordination among astroturfing agents, instead of the individual account characteristics typically used in related approaches such as social bot detection. We develop a methodology that exploits these distinct empirical patterns to identify additional likely astroturfing accounts and validate this detection strategy by analyzing their messages and current account status. However, an analysis relying on Twitter influence metrics shows that the known and suspect NIS accounts only had a limited impact on political social media discussions. By using the principal-agent framework to analyze one of the earliest revealed instances of political astroturfing, we improve on extant methodological approaches to detect disinformation campaigns and ground them more firmly in social science theory.
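The detection idea described here, flagging accounts through traces of coordination such as many accounts posting the same message within a narrow time window rather than through individual account features, can be illustrated with a simple grouping over a tweet table. The schema, the one-hour bucket, and the threshold are assumptions for illustration, not the paper's method.

    import pandas as pd

    # Hypothetical schema: one row per post with user_id, text, created_at.
    tweets = pd.DataFrame({
        "user_id": ["a", "b", "c", "a", "d"],
        "text": ["Vote X!", "Vote X!", "Vote X!", "hello", "Vote X!"],
        "created_at": pd.to_datetime([
            "2012-12-01 10:00", "2012-12-01 10:10", "2012-12-01 10:20",
            "2012-12-02 09:00", "2012-12-05 18:00",
        ]),
    })

    # Bucket posts by identical text and hour; coordinated campaigns show many
    # distinct accounts posting the same message in the same bucket.
    tweets["bucket"] = tweets["created_at"].dt.floor("h")
    groups = tweets.groupby(["text", "bucket"])["user_id"].nunique()
    suspicious = groups[groups >= 3]   # threshold chosen arbitrarily for illustration
    print(suspicious)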
Article
Full-text available
Objectives: To understand how Twitter bots and trolls ("bots") promote online health content. Methods: We compared bots' to average users' rates of vaccine-relevant messages, which we collected online from July 2014 through September 2017. We estimated the likelihood that users were bots, comparing proportions of polarized and antivaccine tweets across user types. We conducted a content analysis of a Twitter hashtag associated with Russian troll activity. Results: Compared with average users, Russian trolls (χ2(1) = 102.0; P < .001), sophisticated bots (χ2(1) = 28.6; P < .001), and "content polluters" (χ2(1) = 7.0; P < .001) tweeted about vaccination at higher rates. Whereas content polluters posted more antivaccine content (χ2(1) = 11.18; P < .001), Russian trolls amplified both sides. Unidentifiable accounts were more polarized (χ2(1) = 12.1; P < .001) and antivaccine (χ2(1) = 35.9; P < .001). Analysis of the Russian troll hashtag showed that its messages were more political and divisive. Conclusions: Whereas bots that spread malware and unsolicited content disseminated antivaccine messages, Russian trolls promoted discord. Accounts masquerading as legitimate users create false equivalency, eroding public consensus on vaccination. Public Health Implications. Directly confronting vaccine skeptics enables bots to legitimize the vaccine debate. More research is needed to determine how best to combat bot-driven content. (Am J Public Health. Published online ahead of print August 23, 2018: e1-e7. doi:10.2105/AJPH.2018.304567).
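The reported statistics (for example, χ²(1) = 102.0 for Russian trolls versus average users) are standard chi-square tests on 2×2 contingency tables of account type by message type. A generic sketch with invented counts, shown only to make the test concrete:

    from scipy.stats import chi2_contingency

    # Rows: account type (e.g., troll, average user);
    # columns: vaccine-related tweets vs. other tweets.
    # These counts are invented purely to illustrate the test, not the study's data.
    table = [[250, 750],
             [100, 900]]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2({dof}) = {chi2:.1f}, p = {p:.4f}")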
Article
Full-text available
Social media is an amazing platform for enhancing public exposure. Anyone, even social bots, can reach out to a vast community and expose their opinion. But what happens when fake news is (un)intentionally spread within social media? This paper reviews techniques that can be used to fabricate fake news and depicts a scenario where social bots evolve in a fully semantic Web to infest social media with automatically generated deceptive information.
Conference Paper
Full-text available
The rise of web services and the popularity of online social networks (OSNs) like Facebook, Twitter, and LinkedIn have led to the rise of unwelcome social bots as automated social actors. Those actors can play many malicious roles, including infiltrators of human conversations, scammers, impersonators, misinformation disseminators, stock market manipulators, astroturfers, and content polluters (spammers, malware spreaders). It is undeniable that social bots have a major impact on social networks. Therefore, this paper reveals the potential hazards of malicious social bots, reviews the detection techniques within a methodological categorization, and proposes avenues for future research.
Article
Full-text available
Background: As e-cigarette use rapidly increases in popularity, data from online social systems (Twitter, Instagram, Google Web Search) can be used to capture and describe the social and environmental context in which individuals use, perceive, and are marketed this tobacco product. Social media data may serve as a massive focus group where people organically discuss e-cigarettes unprimed by a researcher, without instrument bias, captured in near real time and at low costs. Objective: This study documents e-cigarette-related discussions on Twitter, describing themes of conversations and locations where Twitter users often discuss e-cigarettes, to identify priority areas for e-cigarette education campaigns. Additionally, this study demonstrates the importance of distinguishing between social bots and human users when attempting to understand public health-related behaviors and attitudes. Methods: E-cigarette-related posts on Twitter (N=6,185,153) were collected from December 24, 2016, to April 21, 2017. Techniques drawn from network science were used to determine discussions of e-cigarettes by describing which hashtags co-occur (concept clusters) in a Twitter network. Posts and metadata were used to describe where geographically e-cigarette-related discussions in the United States occurred. Machine learning models were used to distinguish between Twitter posts reflecting attitudes and behaviors of genuine human users from those of social bots. Odds ratios were computed from 2x2 contingency tables to detect if hashtags varied by source (social bot vs human user) using the Fisher exact test to determine statistical significance. Results: Clusters found in the corpus of hashtags from human users included behaviors (eg, #vaping), vaping identity (eg, #vapelife), and vaping community (eg, #vapenation). Additional clusters included products (eg, #eliquids), dual tobacco use (eg, #hookah), and polysubstance use (eg, #marijuana). Clusters found in the corpus of hashtags from social bots included health (eg, #health), smoking cessation (eg, #quitsmoking), and new products (eg, #ismog). Social bots were significantly more likely to post hashtags that referenced smoking cessation and new products compared to human users. The volume of tweets was highest in the Mid-Atlantic (eg, Pennsylvania, New Jersey, Maryland, and New York), followed by the West Coast and Southwest (eg, California, Arizona and Nevada). Conclusions: Social media data may be used to complement and extend the surveillance of health behaviors including tobacco product use. Public health researchers could harness these data and methods to identify new products or devices. Furthermore, findings from this study demonstrate the importance of distinguishing between Twitter posts from social bots and humans when attempting to understand attitudes and behaviors. Social bots may be used to perpetuate the idea that e-cigarettes are helpful in cessation and to promote new products as they enter the marketplace.
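The comparison of bot and human hashtag use described above reduces to odds ratios on 2×2 tables (hashtag present or absent, by account type) tested with Fisher's exact test. A generic sketch with invented counts:

    from scipy.stats import fisher_exact

    # Rows: social bot accounts, human accounts;
    # columns: posts containing #quitsmoking, posts without it.
    # Counts are invented for illustration only.
    table = [[40, 960],
             [12, 1988]]

    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")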
Article
Full-text available
In this article, we present results on the identification and behavioral analysis of social bots in a sample of 542,584 Tweets, collected before and after Japan's 2014 general election. Typical forms of bot activity include massive Retweeting and repeated posting of (nearly) the same message, sometimes used in combination. We focus on the second method and present (1) a case study on several patterns of bot activity, (2) methodological considerations on the automatic identification of such patterns and the prerequisite near-duplicate detection, and (3) we give qualitative insights into the purposes behind the usage of social/political bots. We argue that it was in the latency of the semi-public sphere of social media, and not in the visible or manifest public sphere (official campaign platform, mass media), where Shinzō Abe's hidden nationalist agenda interlocked and overlapped with the one propagated by organizations such as Nippon Kaigi and Internet right-wingers (netto uyo) during the election campaign, the latter potentially forming an enormous online support army of Abe's agenda.
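Counting "repeated posting of (nearly) the same message" presupposes near-duplicate detection. A simple illustrative approach (not the authors' method) is pairwise string similarity after light normalization; a real pipeline would use hashing such as MinHash to scale beyond small samples.

    from difflib import SequenceMatcher

    def normalized(text):
        """Crude normalization: lowercase and collapse whitespace."""
        return " ".join(text.lower().split())

    def near_duplicates(posts, threshold=0.9):
        """Return (i, j, ratio) for post pairs whose similarity exceeds the threshold.
        Quadratic in the number of posts, so suitable only for small illustrations."""
        pairs = []
        for i in range(len(posts)):
            for j in range(i + 1, len(posts)):
                ratio = SequenceMatcher(None, normalized(posts[i]), normalized(posts[j])).ratio()
                if ratio >= threshold:
                    pairs.append((i, j, ratio))
        return pairs

    posts = [
        "Support the campaign for a stronger economy!",
        "Support the campaign for a stronger economy!!",
        "What a lovely morning in Tokyo.",
    ]
    print(near_duplicates(posts))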
Article
Full-text available
In this article, we uncover a network of Twitterbots comprising 13,493 accounts that tweeted the United Kingdom European Union membership referendum, only to disappear from Twitter shortly after the ballot. We compare active users to this set of political bots with respect to temporal tweeting behavior, the size and speed of retweet cascades, and the composition of their retweet cascades (user-to-bot vs. bot-to-bot) to evidence strategies for bot deployment. Our results move forward the analysis of political bots by showing that Twitterbots can be effective at rapidly generating small- to medium-sized cascades; that the retweeted content comprises user-generated hyperpartisan news, which is not strictly fake news, but whose shelf life is remarkably short; and, finally, that a botnet may be organized in specialized tiers or clusters dedicated to replicating either active users or content generated by other bots.
Article
Full-text available
Social bots are currently regarded as an influential but also somewhat mysterious factor in public discourse and opinion making. They are considered capable of massively distributing propaganda in social and online media, and their application is even suspected to be partly responsible for recent election results. Astonishingly, the term 'Social Bot' is not well defined, and different scientific disciplines use divergent definitions. This work starts with a balanced definition attempt before providing an overview of how social bots actually work (taking the example of Twitter) and what their current technical limitations are. Despite recent research progress in Deep Learning and Big Data, there are many activities bots cannot handle well. We then discuss how bot capabilities can be extended and controlled by integrating humans into the process and argue that this is currently the most promising way to realize effective interactions with other humans.
Conference Paper
Full-text available
While most online social media accounts are controlled by humans, these platforms also host automated agents called social bots or sybil accounts. Recent literature reported on cases of social bots imitating humans to manipulate discussions, alter the popularity of users, pollute content and spread misinformation, and even perform terrorist propaganda and recruitment actions. Here we present BotOrNot, a publicly-available service that leverages more than one thousand features to evaluate the extent to which a Twitter account exhibits similarity to the known characteristics of social bots. Since its release in May 2014, BotOrNot has served over one million requests via our website and APIs.
Article
Full-text available
From politicians and nation states to terrorist groups, numerous organizations reportedly conduct explicit campaigns to influence opinions on social media, posing a risk to freedom of expression. Thus, there is a need to identify and eliminate "influence bots"--realistic, automated identities that illicitly shape discussions on sites like Twitter and Facebook--before they get too influential.
Conference Paper
Full-text available
Technology is rapidly evolving, and with it comes increasingly sophisticated bots (i.e. software robots) which automatically produce content to inform, influence, and deceive genuine users. This is particularly a problem for social media networks where content tends to be extremely short, informally written, and full of inconsistencies. Motivated by the rise of bots on these networks, we investigate the ease with which a bot can deceive a human. In particular, we focus on deceiving a human into believing that an automatically generated sample of text was written by a human, as well as analysing which factors affect how convincing the text is. To accomplish this, we train a set of models to write text about several distinct topics, to simulate a bot's behaviour, which are then evaluated by a panel of judges. We find that: (1) typical Internet users are twice as likely to be deceived by automated content than security researchers; (2) text that disagrees with the crowd's opinion is more believably human; (3) light-hearted topics such as Entertainment are significantly easier to deceive with than factual topics such as Science; and (4) automated text on Adult content is the most deceptive regardless of a user's background.
Conference Paper
Full-text available
This paper identifies and evaluates key factors that influence credibility perception in microblogs. Specifically, we report on a demographic survey (N=81) followed by a user experiment (N=102) in order to answer the following research questions: (1) What are the important cues that contribute to information being perceived as credible? and (2) To what extent is such a quantification portable across different microblogging platforms? To answer the second question, we study two popular microblogs, Reddit and Twitter. Key results include that significant effects of individual factors can be isolated, are portable, and that metadata and image type elements are, in general, the strongest influencing factors in credibility assessments.
Article
Full-text available
Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand. The author reviews evidence of such a bias in a variety of guises and gives examples of its operation in several practical contexts. Possible explanations are considered, and the question of its utility or disutility is discussed.
Article
Full-text available
Party identification is central to the study of American political behavior yet there remains disagreement over whether it is largely instrumental or expressive in nature. We draw on social identity theory to develop the expressive model and conduct four studies to compare it to an instrumental explanation of campaign involvement. We find strong support for the expressive model: a multi-item partisan identity scale better accounts for campaign activity than a strong stance on subjectively important policy issues, strength of ideological self-placement, or a measure of ideological identity. A series of experiments underscore the power of partisan identity to generate action-oriented emotions that drive campaign activity. Strongly identified partisans feel angrier than weaker partisans when threatened with electoral loss and more positive when reassured of victory. In contrast, those who hold a strong and ideologically consistent position on issues are no more aroused emotionally than others by party threats or reassurances. In addition, threat and reassurance to the party’s status aroused greater anger and enthusiasm among partisans than a threatened loss or victory to central policy issues. Our findings underscore the power of an expressive partisan identity to drive campaign involvement and generate strong emotional reactions to ongoing campaign events.
Article
Full-text available
Objective: To determine the evidence of effectiveness and safety for introduction of five recent and ostensibly high value implantable devices in major joint replacement to illustrate the need for change and inform guidance on evidence based introduction of new implants into healthcare. Design: Systematic review of clinical trials, comparative observational studies, and registries for comparative effectiveness and safety of five implantable device innovations. Data sources: PubMed (Medline), Embase, Web of Science, Cochrane, CINAHL, reference lists of articles, annual reports of major registries, summaries of safety and effectiveness for pre-market application and mandated post-market studies at the US Food and Drug Administration. Study selection: The five selected innovations comprised three in total hip replacement (ceramic-on-ceramic bearings, modular femoral necks, and uncemented monoblock cups) and two in total knee replacement (high flexion knee replacement and gender specific knee replacement). All clinical studies of primary total hip or knee replacement for symptomatic osteoarthritis in adults that compared at least one of the clinical outcomes of interest (patient centred outcomes or complications, or both) in the new implant group and control implant group were considered. Data searching, abstraction, and analysis were independently performed and confirmed by at least two authors. Quantitative data syntheses were performed when feasible. Results: After assessment of 10 557 search hits, 118 studies (94 unique study cohorts) met the inclusion criteria and reported data related to 15 384 implants in 13 164 patients. Comparative evidence per device innovation varied from four low to moderate quality retrospective studies (modular femoral necks) to 56 studies of varying quality including seven high quality (randomised) studies (high flexion knee replacement). None of the five device innovations was found to improve functional or patient reported outcomes. National registries reported two to 12 year follow-up for revision occurrence related to more than 200 000 of these implants. Reported comparative data with well established alternative devices (over 1 200 000 implants) did not show improved device survival. Moreover, we found higher revision occurrence associated with modular femoral necks (hazard ratio 1.9) and ceramic-on-ceramic bearings (hazard ratio 1.0-1.6) in hip replacement and with high flexion knee implants (hazard ratio 1.0-1.8). Conclusion: We did not find convincing high quality evidence supporting the use of five substantial, well known, and already implemented device innovations in orthopaedics. Moreover, existing devices may be safer to use in total hip or knee replacement. Improved regulation and professional society oversight are necessary to prevent patients from being further exposed to these and future innovations introduced without proper evidence of improved clinical efficacy and safety.
Article
Full-text available
The Turing test asked whether one could recognize the behavior of a human from that of a computer algorithm. Today this question has suddenly become very relevant in the context of social media, where text constraints limit the expressive power of humans, and real incentives abound to develop human-mimicking software agents called social bots. These elusive entities wildly populate social media ecosystems, often going unnoticed among the population of real people. Bots can be benign or harmful, aiming at persuading, smearing, or deceiving. Here we discuss the characteristics of modern, sophisticated social bots, and how their presence can endanger online ecosystems and our society. We then discuss current efforts aimed at detection of social bots in Twitter. Characteristics related to content, network, sentiment, and temporal patterns of activity are imitated by bots but at the same time can help discriminate synthetic behaviors from human ones, yielding signatures of engineered social tampering.
Article
Full-text available
Although widely studied in other domains, relatively little is known about the metacognitive processes that monitor and control behaviour during reasoning and decision-making. In this paper, we examined the conditions under which two fluency cues are used to monitor initial reasoning: answer fluency, or the speed with which the initial, intuitive answer is produced (Thompson, Prowse Turner, & Pennycook, 2011), and perceptual fluency, or the ease with which problems can be read (Alter, Oppenheimer, Epley, & Eyre, 2007). The first two experiments demonstrated that answer fluency reliably predicted Feeling of Rightness (FOR) judgments to conditional inferences and base rate problems, which subsequently predicted the amount of deliberate processing as measured by thinking time and answer changes; answer fluency also predicted retrospective confidence judgments (Experiment 3b). Moreover, the effect of answer fluency on reasoning was independent of the effect of perceptual fluency, establishing that these are empirically independent constructs. In five experiments with a variety of reasoning problems similar to those of Alter et al. (2007), we found no effect of perceptual fluency on FOR, retrospective confidence or accuracy; however, we did observe that participants spent more time thinking about hard-to-read stimuli, although this additional time did not result in answer changes. In our final two experiments, we found that perceptual disfluency increased accuracy on the CRT (Frederick, 2005), but only amongst participants of high cognitive ability. As Alter et al.'s samples were gathered from prestigious universities, collectively, the data to this point suggest that perceptual fluency prompts additional processing in general, but this processing may result in higher accuracy only for the most cognitively able.
Conference Paper
Full-text available
Twitter is a new web application playing dual roles of online social networking and micro-blogging. Users communicate with each other by publishing text-based posts. The popularity and open structure of Twitter have attracted a large number of automated programs, known as bots, which appear to be a double-edged sword to Twitter. Legitimate bots generate a large number of benign tweets delivering news and updating feeds, while malicious bots spread spam or malicious content. More interestingly, in the middle ground between human and bot, the cyborg has emerged, referring to either a bot-assisted human or a human-assisted bot. To assist human users in identifying who they are interacting with, this paper focuses on the classification of human, bot and cyborg accounts on Twitter. We first conduct a set of large-scale measurements with a collection of over 500,000 accounts. We observe the difference among human, bot and cyborg in terms of tweeting behavior, tweet content, and account properties. Based on the measurement results, we propose a classification system that includes the following four parts: (1) an entropy-based component, (2) a machine-learning-based component, (3) an account properties component, and (4) a decision maker. It uses the combination of features extracted from an unknown user to determine the likelihood of being a human, bot or cyborg. Our experimental evaluation demonstrates the efficacy of the proposed classification system.
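The entropy-based component rests on the observation that automated accounts tend to post at regular intervals, which yields a low-entropy timing distribution, whereas human posting is irregular. A rough illustration of that idea with an arbitrary binning scheme, not the paper's exact procedure:

    import numpy as np
    from scipy.stats import entropy

    def timing_entropy(post_times, bins=20):
        """Shannon entropy (bits) of the distribution of inter-post intervals in seconds."""
        intervals = np.diff(np.sort(np.asarray(post_times, dtype=float)))
        counts, _ = np.histogram(intervals, bins=bins)
        probs = counts / counts.sum()
        return entropy(probs, base=2)

    rng = np.random.default_rng(1)
    human_times = np.cumsum(rng.exponential(scale=3600, size=200))   # irregular posting
    bot_times = np.cumsum(np.full(200, 600.0))                        # every 10 minutes

    print("human entropy:", round(timing_entropy(human_times), 2))   # relatively high
    print("bot entropy:", round(timing_entropy(bot_times), 2))       # near zero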
Article
Full-text available
Can human beings relate to computer or television programs in the same way they relate to other human beings? Based on numerous psychological studies, this book concludes that people not only can but do treat computers, televisions, and new media as real people and places. Studies demonstrate that people are "polite" to computers; that they treat computers with female voices differently than "male" ones; that large faces on a screen can invade our personal space; and that on-screen and real-life motion can provoke the same physical responses. Using everyday language to engage readers interested in psychology, communication, and computer technology, Reeves and Nass detail how this knowledge can help in designing a wide range of media.
Conference Paper
We are entering an era of AI-Mediated Communication (AI-MC) where interpersonal communication is not only mediated by technology, but is optimized, augmented, or generated by artificial intelligence. Our study takes a first look at the potential impact of AI-MC on online self-presentation. In three experiments we test whether people find Airbnb hosts less trustworthy if they believe their profiles have been written by AI. We observe a new phenomenon that we term the Replicant Effect: Only when participants thought they saw a mixed set of AI- and human-written profiles, they mistrusted hosts whose profiles were labeled as or suspected to be written by AI. Our findings have implications for the design of systems that involve AI technologies in online self-presentation and chart a direction for future work that may upend or augment key aspects of Computer-Mediated Communication theory.
Article
Concerns about public misinformation in the United States—ranging from politics to science—are growing. Here, we provide an overview of how and why citizens become (and sometimes remain) misinformed about science. Our discussion focuses specifically on misinformation among individual citizens. However, it is impossible to understand individual information processing and acceptance without taking into account social networks, information ecologies, and other macro-level variables that provide important social context. Specifically, we show how being misinformed is a function of a person’s ability and motivation to spot falsehoods, but also of other group-level and societal factors that increase the chances of citizens to be exposed to correct(ive) information. We conclude by discussing a number of research areas—some of which echo themes of the 2017 National Academies of Sciences, Engineering, and Medicine’s Communicating Science Effectively report—that will be particularly important for our future understanding of misinformation, specifically a systems approach to the problem of misinformation, the need for more systematic analyses of science communication in new media environments, and a (re)focusing on traditionally underserved audiences.
Article
Misinformation often continues to influence people’s memory and inferential reasoning after it has been retracted; this is known as the continued influence effect (CIE). Previous research investigating the role of attitude‐based motivated reasoning in this context has found conflicting results: Some studies have found that worldview can have a strong impact on the magnitude of the CIE, such that retractions are less effective if the misinformation is congruent with a person’s relevant attitudes, in which case the retractions can even backfire. Other studies have failed to find evidence for an effect of attitudes on the processing of misinformation corrections. The present study used political misinformation—specifically fictional scenarios involving misconduct by politicians from left‐wing and right‐wing parties—and tested participants identifying with those political parties. Results showed that in this type of scenario, partisan attitudes have an impact on the processing of retractions, in particular (1) if the misinformation relates to a general assertion rather than just a specific singular event and (2) if the misinformation is congruent with a conservative partisanship.
Article
Why do people believe blatantly inaccurate news headlines ("fake news")? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news - even for headlines that align with individuals' political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant's ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one's political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se - a finding that opens potential avenues for fighting fake news.
Book
This book is an appreciation of the long and illustrious career of Milton Lodge. Having begun his academic life as a Kremlinologist in the 1960s, Milton Lodge radically shifted gears to become one of the most influential scholars of the past half century working at the intersection of psychology and political science. In borrowing and refashioning concepts from cognitive psychology, social cognition and neuroscience, his work has led to wholesale transformations in the way political scientists understand the mass political mind, as well as the nature and quality of democratic citizenship. In this collection, Lodge’s collaborators and colleagues describe how his work has influenced their own careers, and how his insights have been synthesized into the bloodstream of contemporary political psychology. The volume includes personal reflections from Lodge’s longstanding collaborators as well as original research papers from leading figures in political psychology who have drawn inspiration from the Lodgean oeuvre. Reflecting on his multi-facetted contribution to the study of political psychology, The Feeling, Thinking Citizen illustrates the centrality of Lodge’s work in constructing a psychologically plausible model of the democratic citizen.
Article
Democracies assume accurate knowledge by the populace, but the human attraction to fake and untrustworthy news poses a serious problem for healthy democratic functioning. We articulate why and how identification with political parties – known as partisanship – can bias information processing in the human brain. There is extensive evidence that people engage in motivated political reasoning, but recent research suggests that partisanship can alter memory, implicit evaluation, and even perceptual judgments. We propose an identity-based model of belief for understanding the influence of partisanship on these cognitive processes. This framework helps to explain why people place party loyalty over policy, and even over truth. Finally, we discuss strategies for de-biasing information processing to help to create a shared reality across partisan divides.
Book
Cognitive Illusions (2nd ed.) explores a wide range of fascinating psychological effects in the way we think, judge and remember in our everyday lives. Featuring contributions from leading researchers, the book defines what cognitive illusions are and discusses their theoretical status: Are such illusions proof for a faulty human information-processing system, or do they only represent by-products of otherwise adaptive cognitive mechanisms? Throughout the book, background to phenomena such as illusions of control, overconfidence and hindsight bias are discussed, before considering the respective empirical research, potential explanations of the phenomenon and relevant applied perspectives. Each chapter also features the detailed description of an experiment that can be used as classroom demonstration. Featuring six new chapters, this edition has been thoroughly updated throughout to reflect recent research and changes of focus within the field. This book will be of interest to students and researchers of cognitive illusions, specifically, those focusing on thinking, reasoning, decision making and memory.
Article
Recent accounts from researchers, journalists, and federal investigators have reached a unanimous conclusion: social media are systematically exploited to manipulate and alter public opinion. Some disinformation campaigns have been coordinated by means of bots, social media accounts controlled by computer scripts that try to disguise themselves as legitimate human users. In this study, we describe one such operation that occurred in the run-up to the 2017 French presidential election. We collected a massive Twitter dataset of nearly 17 million posts published between April 27 and May 7, 2017 (Election Day). We then set out to study the MacronLeaks disinformation campaign: by leveraging a mix of machine learning and cognitive-behavioral modeling techniques, we separated humans from bots and studied the activities of the two groups both independently and in their interplay. We characterize both the bots and the users who engaged with them, and contrast the latter with users who did not. The prior interests of disinformation adopters point to the reasons for the campaign's limited success: the users who engaged with MacronLeaks were mostly foreigners with a preexisting interest in alt-right topics and alternative news media, rather than French users with diverse political views. Finally, anomalous account-usage patterns suggest the possible existence of a black market for reusable political disinformation bots.
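A minimal sketch of the kind of supervised human-versus-bot separation this abstract describes, assuming a hypothetical feature set (tweets per day, follower-to-friend ratio, account age, retweet fraction) and hypothetical labels rather than the study's actual machine-learning and behavioral-modeling pipeline:

```python
# Illustrative sketch only: supervised human-vs-bot classification from
# simple account-level features. Features and labels are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical features: tweets/day, follower-to-friend ratio,
# account age in days, fraction of retweets.
X = np.array([
    [450.0, 0.02,   30, 0.95],   # bursty, young, mostly retweets -> bot-like
    [  3.5, 1.10, 2900, 0.20],   # moderate, old account -> human-like
    [320.0, 0.05,   12, 0.90],
    [  8.0, 0.80, 1500, 0.35],
    [510.0, 0.01,    5, 0.99],
    [  1.2, 1.50, 3600, 0.10],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = bot, 0 = human (hypothetical labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=3).mean())  # rough cross-validated accuracy
```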
Article
Social media have been extensively praised for increasing democratic discussion of social issues related to policy and politics. However, what happens when these powerful communication tools are exploited to manipulate online discussion, to change the public perception of political entities, or even to try to affect the outcome of political elections? In this study we investigated how the presence of social media bots, algorithmically driven entities that on the surface appear to be legitimate users, affected political discussion around the 2016 U.S. Presidential election. By leveraging state-of-the-art social bot detection algorithms, we uncovered a large fraction of the user population that may not be human, accounting for a significant portion of the generated content (about one-fifth of the entire conversation). We inferred political partisanship from hashtag adoption, for both humans and bots, and studied spatio-temporal communication, political support dynamics, and influence mechanisms by measuring the bots' level of network embeddedness. Our findings suggest that the presence of social media bots can negatively affect democratic political discussion rather than improve it, which in turn can potentially alter public opinion and endanger the integrity of the Presidential election.
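A minimal sketch of inferring a coarse partisanship label from hashtag adoption, as the abstract describes at a high level; the seed hashtag lists and example tweet below are hypothetical, and the study's own operationalization may differ:

```python
# Illustrative sketch: majority-vote partisanship inference from hashtags.
# Seed lists are hypothetical examples, not the study's actual lists.
from collections import Counter

LEFT_TAGS = {"#imwithher", "#strongertogether"}
RIGHT_TAGS = {"#maga", "#trumptrain"}

def infer_partisanship(hashtags):
    """Return 'left', 'right', or None based on seed-hashtag counts."""
    counts = Counter()
    for tag in hashtags:
        tag = tag.lower()
        if tag in LEFT_TAGS:
            counts["left"] += 1
        elif tag in RIGHT_TAGS:
            counts["right"] += 1
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(infer_partisanship(["#MAGA", "#debate", "#TrumpTrain"]))  # -> "right"
```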
Article
This study investigated the effects of message and social cues on selective exposure to political information in a social media environment. Based on the heuristic-systematic model, we hypothesized that readers' selective consideration of specific cues can be explained by situational motivations. In an experiment (N = 137), subjects primed with motivational goals (accuracy, defense, or impression motivations, as well as a control group) were asked to search for information. Participants preferred attitude-consistent information and balanced information over attitude-inconsistent information, and also preferred highly recommended articles. Defense-motivated partisans exhibited a stronger confirmation bias, whereas impression motivation amplified the effects of social recommendations. These findings specify the conditions under which individuals engage in narrow, open-minded, or social patterns of information selection.
Article
Research in psychology and political science has identified motivated reasoning as a set of biases that inhibit a person’s ability to process political information objectively. This research has important implications for the information literacy movement’s aims of fostering lifelong learning and informed citizenship. This essay argues that information literacy education should broaden its scope to include more than just knowledge of information and its sources; it should also include knowledge of how people interact with information, particularly the ways that motivated reasoning can influence citizens’ interactions with political information.
Article
Bots are social media accounts that automate interaction with other users, and they were active in the StrongerIn-Brexit conversation on Twitter. These automated scripts generate content on the platform and then interact with people. Political bots are automated accounts that are particularly active on public policy issues, elections, and political crises. In this preliminary study of the use of political bots during the UK referendum on EU membership, we analyze the tweeting patterns of both human users and bots. We find that political bots play a small but strategic role in the referendum conversations: (1) the family of hashtags associated with the argument for leaving the EU dominates, (2) different perspectives on the issue utilize different levels of automation, and (3) less than 1 percent of sampled accounts generate almost a third of all the messages.
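A minimal sketch of the concentration statistic referenced in point (3), i.e., the share of all messages produced by the most active fraction of accounts; the per-account counts below are hypothetical, not the study's data:

```python
# Illustrative sketch: how concentrated is message volume among accounts?
def top_share(tweet_counts, top_fraction=0.01):
    """Fraction of all messages produced by the top `top_fraction` of accounts."""
    ranked = sorted(tweet_counts, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return sum(ranked[:k]) / sum(ranked)

# 100 hypothetical accounts: a couple of hyperactive ones, a long quiet tail.
counts = [5000, 3000] + [10] * 98
print(f"{top_share(counts):.0%} of messages come from the top 1% of accounts")
```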
Conference Paper
With the rise of social media and advancements in AI technology, human-bot interaction will soon be commonplace. In this paper we explore human-bot interaction in Stack Overflow, a question and answer website for developers. For this purpose, we built a bot emulating an ordinary user answering questions concerning the resolution of git error messages. In a first run this bot impersonated a human, while in a second run the same bot revealed its machine identity. Despite being functionally identical, the two bot variants elicited quite different reactions.
Chapter
Twitter has become a popular element in political campaigns around the world. The posts and interactions of political elites, journalists, and the general public constitute a political communication space. This communication space is deeply interconnected with the spaces built by media coverage and campaign communication, but it also follows dynamics specific to the platform's technology and the cultural usage practices of its users. To understand these dynamics, we first have to understand the usage patterns of politically vocal Twitter users. In this chapter, I present an analysis of Twitter usage patterns by publics, prominent users, and politicians. In general, the findings support the mediation hypothesis: highly skewed activity, with few users posting many messages and many users posting very few; high dependence on external stimuli; and distorted levels of political support in favor of the Pirate Party and the Greens all point to Twitter communication about politics being highly mediated. Given these mediating factors, it seems plausible that Twitter data might serve as an indicator of Twitter users’ attention shifts with regard to political information, but hold little information about public opinion at large.
Article
What are the fundamental causes of human behavior, and to what degree is it intentional and consciously controlled? We review the literature on automaticity in human behavior, with an emphasis on our own theory of motivated political reasoning, John Q. Public, and the experimental evidence we have collected (Lodge & Taber, 2013). Our fundamental theoretical claim is that affective and cognitive reactions to external and internal events are triggered unconsciously, followed spontaneously by the spreading of activation through associative pathways that link thoughts to feelings to intentions to behavior, so that very early events, even those invisible to conscious awareness, set the direction for all subsequent processing. We find evidence in support of four hypotheses that are central to our theory: hot cognition, affect transfer, affect contagion, and motivated bias.
Article
Partisans often perceive real world conditions in a manner that credits their own party. Yet recent findings suggest that partisans are capable of setting their loyalties aside when confronted with clear evidence, for example, during an economic crisis. This study examines a different possibility. While partisans may acknowledge the same reality, they may find other ways of aligning undeniable realities with their party loyalties. Using monthly survey data collected before and after the unexpected collapse of the British national economy (2004-10), this study presents one key finding: as partisans came to agree that economic conditions had gotten much worse, they conversely polarized in whether they thought the government was responsible. While the most committed partisans were surprisingly willing to acknowledge the economic collapse, they were also the most eager to attribute responsibility selectively. For that substantial share of the electorate, partisan-motivated reasoning seems highly adaptive.
Article
We propose a model of motivated skepticism that helps explain when and why citizens are biased information processors. Two experimental studies explore how citizens evaluate arguments about affirmative action and gun control, finding strong evidence of a prior-attitude effect such that attitudinally congruent arguments are evaluated as stronger than attitudinally incongruent arguments. When reading pro and con arguments, participants counterargue the contrary arguments and uncritically accept supporting arguments, evidence of a disconfirmation bias. We also find a confirmation bias, the seeking out of confirmatory evidence, when participants are free to self-select the source of the arguments they read. Both the confirmation and disconfirmation biases lead to attitude polarization, the strengthening of attitudes from time 1 (t1) to time 2 (t2), especially among those with the strongest priors and highest levels of political sophistication. We conclude with a discussion of the normative implications of these findings for rational behavior in a democracy.
Article
As a result of the controversy over the dimensionality of the ethos/source credibility construct and the associated plethora of empirical studies in the 1960s and 1970s, Aristotle's dimension of “goodwill” has been dismissed by many contemporary theorists and researchers. It is argued that this occurred as a result of errors made in the earlier empirical research and that “goodwill” can be measured, contrary to earlier claims, and should be restored to its former status in rhetorical communication theory. Empirical research is reported indicating the existence of the goodwill dimension as part of the structure of the ethos/source credibility construct and a measure of that dimension is provided with evidence for its reliability and validity.
Article
The authors consider the motivations governing information processing within the framework of the heuristic-systematic model. The model proposes two concurrent modes by which people process information and reach judgments: a relatively effortless heuristic mode, characterized by the application of simple decision rules (e.g., "experts can be trusted"), and a more effortful and analytic systematic mode, in which particularistic or individuating information about objects of judgment is used. Which mode predominates in any situation depends on the individual's current motivation and capacity to engage in detailed processing; the motivational conditions considered are no motivation, accuracy motivation, defense motivation, and impression motivation.
Article
The tremendous amount of information available online has resulted in considerable research on information and source credibility. The vast majority of scholars, however, assume that individuals work in isolation to form credibility opinions and that people must assess information credibility in an effortful and time-consuming manner. Focus group data from 109 participants were used to examine these assumptions. Results show that most users rely on others to make credibility assessments, often through the use of group-based tools. Results also indicate that rather than systematically processing information, participants routinely invoked cognitive heuristics to evaluate the credibility of information and sources online. These findings are leveraged to suggest a number of avenues for further credibility theorizing, research, and practice.
Article
Behavioral economics tells us that emotions can profoundly affect individual behavior and decision-making. Does this also apply to societies at large; that is, can societies experience mood states that affect their collective decision-making? By extension, is the public mood correlated with, or even predictive of, economic indicators? Here we investigate whether measurements of collective mood states derived from large-scale Twitter feeds are correlated with the value of the Dow Jones Industrial Average (DJIA) over time. We analyze the text content of daily Twitter feeds with two mood-tracking tools: OpinionFinder, which measures positive versus negative mood, and the Google Profile of Mood States (GPOMS), which measures mood along six dimensions (Calm, Alert, Sure, Vital, Kind, and Happy). We cross-validate the resulting mood time series by comparing their ability to detect the public's response to the presidential election and Thanksgiving Day in 2008. A Granger causality analysis and a Self-Organizing Fuzzy Neural Network are then used to investigate the hypothesis that public mood states, as measured by the OpinionFinder and GPOMS mood time series, are predictive of changes in DJIA closing values. Our results indicate that the accuracy of DJIA predictions can be significantly improved by the inclusion of specific public mood dimensions but not others. We find an accuracy of 87.6% in predicting the daily up and down changes in the closing values of the DJIA and a reduction of the mean average percentage error by more than 6%.
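A minimal sketch of the Granger-causality step described in this abstract, using synthetic stand-ins for a mood series and DJIA returns rather than the paper's OpinionFinder/GPOMS data; the lag order is an arbitrary choice:

```python
# Illustrative sketch: does a daily mood series Granger-cause DJIA returns?
# Both series below are synthetic, with mood leading returns by 3 days.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 250                                                  # ~ one trading year
calm = rng.normal(size=n)                                # synthetic "Calm" index
returns = 0.4 * np.roll(calm, 3) + rng.normal(size=n)    # returns lag mood by 3 days

data = pd.DataFrame({"djia_returns": returns, "calm": calm}).iloc[3:]
# Tests whether the second column ("calm") helps predict the first ("djia_returns").
results = grangercausalitytests(data, maxlag=4)
```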