
Filter bubbles and fake news



The results of the 2016 Brexit referendum in the U.K. and presidential election in the U.S. surprised pollsters and traditional media alike, and social media is now being blamed in part for creating echo chambers that encouraged the spread of fake news that influenced voters.
By Dominic DiFranzo and Kristine Gloria-Garcia
The EU referendum in the U.K. and the U.S. presidential election both shocked journalists,
pollsters, and citizens around the world. The outcomes—the U.K. voting to leave the EU and
Donald Trump winning the presidency—raise the question of how traditional media and polls
could have been so wrong in their predictions [1]. While plenty of fingers have pointed to
outside interference, changing demographics, and economic concerns, one scapegoat—social
media—has received special attention. Some critics place the fault with Facebook, Google, and
other social media platforms for allowing the spread of “fake news,” pointing to, for example,
the creation and facilitation of echo chambers, where users are not exposed to outside opinions
and views [2]. According to The New York Times, following Trump’s victory, executives at
Facebook began to privately chat about the role their company and platform had on the election
[1]. However, Facebook CEO Mark Zuckerberg downplayed the company’s role in the election,
saying, “Voters make decisions based on their lived experience” and the theory that fake news
shared on Facebook “influenced the election in any way, is a pretty crazy idea.” [3].
So we ask, did social media play a role in these election upsets? In this review, we examine
whether social media really played a role, if fake news and filter bubbles had an effect, and if so,
what can be done about it in the future.
Social Media Effect
Social media trends (in terms of, say, numbers of posts, shares, and likes) on the day of the
elections favored both Brexit and Trump [4], counter to the narrative of reputable polling and
traditional media. Trump had more followers across social media platforms; he pushed, and
continues to push, messages through social media rather than traditional media channels, and had
higher engagement rates than his opponent. One of the most-shared posts on social media leading
up to the election was “Why I’m Voting For Donald Trump,” a blog post by a well-known
conservative female blogger [4]. The trend was similar during the EU referendum in the U.K.:
the official “Leave” campaign had more followers and engagement on social media platforms, as
well as more success in spreading pro-Leave hashtags and messages [5].
Moreover, according to Pew Research, 61% of millennials use Facebook as their primary source
for political news [6]. Such news stories have been shown to have a direct effect on political
actions, attitudes and outcomes. In 2012, a study reported in Nature described a randomized
controlled trial of political mobilization messages delivered to 61 million Facebook users during
the 2010 U.S. congressional elections. It found the messages directly influenced political self-
expression, information seeking, and real-world voting behavior [7]. This is not surprising, as the
dominant funding strategy of most social media platforms follows the assumption that sponsored
social media posts, or advertisements, can change the buying behavior of their users [8]. In
another example, the past decade has seen a rise in political movements (such as the Arab Spring
and Black Lives Matter) that start on and are sustained through social media platforms like
Twitter and Facebook.
Hampton and Hargittai [9] showed evidence that the demographic most disconnected from social
media and the web was also the most likely to support Trump. Voters without a college degree
supported Trump by a nine-percentage-point margin, whereas in past elections they were equally
likely to support Democrats and Republicans. The gap widens greatly when pollsters look at
white non-college-educated voters, who supported Trump by 39 percentage points, the largest
margin of support from this demographic since 1980 [9]. This group in particular—white
non-college-educated voters—was also the most likely to lack Internet access, and those who do
have access are the least likely to use social media [6].
Additionally, Pew Research shows that while millennials (people born 1977 to 1995) get their
political news from social media platforms, most Americans still rely on their
local TV news stations and other traditional mass-media sources [6]. These are the same mass-
media sources where Trump received significantly more attention and coverage compared to
Hillary Clinton [6]. And while social-media-savvy millennials overwhelmingly supported
Clinton, voter turnout from this demographic was lower than in both the 2008 and 2012
presidential elections. Further data analysis shows Clinton supporters were most likely to be
engaged on social media platforms like Twitter and Reddit [9].
Why, then, did a first-pass look at social data trends show more support for Trump than Clinton
if social media users seemed to support Clinton more? The question is still being investigated,
but one explanation may lie in the use of botnets. Two recent studies—one from researchers at the
University of Southern California and another from Oxford University, the University of
Washington, and Corvinus University of Budapest—both showed AI-controlled bots were
spreading pro-Trump content in overwhelming numbers. Kollanyi, Howard and Woolley [11]
from Oxford University estimate that one-third of pro-Trump tweets came from automated bots,
which they classified by how often these accounts tweet, at what time of day, and their relation
to other accounts. This created the illusion of more support for Trump on Twitter than there may
have been naturally [10,11]. Kollanyi, Howard, and Woolley [11] noted similar automated
patterns on Twitter leading up to the EU referendum in the U.K., in which pro-Leave tweets
greatly outnumbered pro-Remain tweets.
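The kinds of signals Kollanyi, Howard, and Woolley describe—how often an account tweets, at what time of day, and its relation to other accounts—can be illustrated with a toy scoring function. This is a minimal sketch, not the researchers' actual classifier; every threshold, weight, and field name here is an invented assumption.

```python
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float   # average posting rate
    night_fraction: float   # share of tweets posted between midnight and 6 a.m.
    followers: int
    following: int

def bot_score(a: Account) -> float:
    """Return a score in [0, 1]; higher means more bot-like.
    Thresholds and weights are illustrative, not taken from the cited studies."""
    score = 0.0
    if a.tweets_per_day > 50:        # humans rarely sustain 50+ tweets per day
        score += 0.4
    if a.night_fraction > 0.5:       # round-the-clock posting suggests automation
        score += 0.3
    if a.following > 0 and a.followers / a.following < 0.01:
        score += 0.3                 # follows thousands, followed by almost no one
    return round(score, 2)

likely_bot = Account(tweets_per_day=200, night_fraction=0.6, followers=12, following=5000)
likely_human = Account(tweets_per_day=3, night_fraction=0.05, followers=300, following=280)
print(bot_score(likely_bot), bot_score(likely_human))  # 1.0 0.0
```

Real bot-detection systems replace these hand-set thresholds with supervised machine learning over many more features, but the underlying idea—scoring accounts on behavioral signals—is the same.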
Filter Bubble
Another criticism of social media is that it constructs “filter bubbles,” digital echo chambers
where users see content and posts that agree only with their preexisting beliefs [12]. While there
is an active dialogue as to whether filter bubbles exist, here we highlight work that explores
whether they contributed to the 2016 election results.
In 2015, Facebook funded a study that showed that while Facebook’s own newsfeed algorithm
might favor posts that support a user’s political beliefs, the related filter-bubble effect is due to
the user’s network and past engagement behavior (such as clicking only on certain news
stories); that is, it is not the fault of the newsfeed algorithm but the choices of users themselves.
They also found that this favoritism effect is small overall. The study showed users are only 6%
less likely to see a post that conflicts with their political views when compared to an unfiltered
newsfeed [13].
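To make concrete what "6% less likely" means, the following toy simulation compares how many cross-cutting posts a user sees with and without a filter that drops each conflicting post 6% of the time. The total post count and the share of cross-cutting content are assumptions for illustration, not figures from the study.

```python
import random

random.seed(42)

def simulate_feed(n_posts=100_000, crosscut_share=0.4, suppression=0.06):
    """Count cross-cutting posts surviving an algorithmic filter that drops
    each conflicting post with probability `suppression` (the ~6% effect
    reported in the 2015 Facebook study)."""
    unfiltered = filtered = 0
    for _ in range(n_posts):
        if random.random() < crosscut_share:      # post conflicts with user's views
            unfiltered += 1
            if random.random() >= suppression:    # filter lets ~94% of these through
                filtered += 1
    return unfiltered, filtered

seen_all, seen_filtered = simulate_feed()
print(f"relative reduction in cross-cutting posts: {1 - seen_filtered / seen_all:.3f}")
```

The measured reduction converges to the configured 6% as the number of posts grows; whether a 6% suppression is "small" is exactly the point Tufekci and others have contested.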
Personal recommendation systems, or systems that learn and react to individual users, have been
claimed to be one cause of filter bubbles [12]. Other studies have shown that personalized
recommendations can actually expose users to content they might not have found otherwise [14]
and that personalized recommendations are not used as extensively as once thought [15]. A 2011
national survey by Pew Research found Facebook use is actually correlated with knowing and
interacting with a greater variety of people from different backgrounds and demographics [16].
This correlation persists even after controlling for the demographic characteristics of Facebook
users compared with the U.S. population as a whole. In a sense, social media may actually be bursting filter
bubbles. This same survey showed that people who are offline are more likely to be socially
isolated and have less diverse social relationships, thereby being exposed to less-diverse ideas
and viewpoints [16].
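The finding that personalized recommendations can expose users to content they would not otherwise find [14] is easy to see in a minimal collaborative-filtering sketch. The users, reading histories, and similarity measure below are all invented for illustration.

```python
# Invented reading histories; a real system would learn these from behavior logs.
interactions = {
    "alice": {"local_news", "science_blog", "cooking"},
    "bob":   {"local_news", "science_blog", "world_news"},
    "carol": {"cooking", "gardening"},
}

def jaccard(a: set, b: set) -> float:
    """Similarity of two users' histories: shared items over all items."""
    return len(a & b) / len(a | b)

def recommend(user: str, k: int = 1) -> list[str]:
    """Score each unseen item by the similarity of the users who read it."""
    history = interactions[user]
    scores: dict[str, float] = {}
    for other, items in interactions.items():
        if other == user:
            continue
        sim = jaccard(history, items)
        for item in items - history:   # only items the user has never seen
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # ['world_news']
```

Here the recommender surfaces a story outside alice's own history because a similar reader engaged with it; whether this diversifies or narrows exposure depends on how homogeneous the similar users are.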
While these studies are compelling, evidence of filter bubbles and their effects on users
continues to grow. For example, several scholars have criticized the 2015 Facebook newsfeed
study cited earlier. Specifically, Zeynep Tufekci rebutted many of the study’s findings and
methods [17], arguing it underplayed its most important conclusion: the newsfeed algorithm
decides the placement of posts, and this placement greatly influences what users click and read.
Tufekci also highlighted that the sampling was not random and thus cannot be generalized across
all Facebook users. Even if one takes the Facebook study at face value, it still shows the
filter-bubble effect is real and that the algorithm actively suppresses posts that conflict
with a user’s political viewpoint. Other recent studies (such as Del Vicario et al. [18] on the
sharing of scientific and conspiracy stories on Facebook) found evidence of the formation of
echo chambers that cause “confusion about causation, and thus encourage speculation, rumors,
and mistrust.”
Fake News
On the theme of “speculation, rumors, and mistrust,” fake news is another issue that has plagued
social media platforms during, as well as after, the U.S. elections. Fake news is a recent popular
and purposefully ambiguous term for false news stories that are packaged and published as if
they were genuine. The ambiguity of this term—an inherent property of what it tries to label—
makes its use attractive across the political spectrum, where information that conflicts with an
ideology can be labeled “fake.” The Times published an article [19] chronicling the spread of
fake news on social media, saying one such fake story was shared at least 16,000 times on
Twitter and more than 350,000 times on Facebook. According to BuzzFeed [20], in the months
before the U.S. elections, fake news stories on Facebook actually outperformed real news from
mainstream news outlets. BuzzFeed [20] said these fake news stories overwhelmingly favored
Trump. For example, a fake news story reported that Pope Francis endorsed Trump and was
shared more than one million times on social media feeds. Pope Francis, an advocate for
refugees, made no such endorsement. Not only were these fake news sources shared on social
media platforms, they were also shared by Trump and members of his campaign [9].
Fake news stories have a real effect offline as well. A shooting took place in a Washington, D.C.,
pizzeria, Comet Ping Pong, after fake news stories and conspiracy theories spread about it being
part of a child trafficking ring [21]. Army Lt. Gen. Michael Flynn, Trump’s current National
Security Adviser, shared fake news stories related to this so-called “Pizzagate” scandal more than
16 times, according to a Politico review of his Twitter posts [22].
Although fake news is a problem (though not one unique to social media), its dissemination may
not break out of the filter bubble in which it originates. Several studies have shown that fake
news spreads more like an epidemic than real news does, and that such stories usually stay
within the same communities [23]; that is, these stories tend not to reach or convince
outsiders. Likewise, Mark Zuckerberg said in a Facebook post following the U.S. election, “Of
all the content on Facebook, more than 99% of what people see is authentic. Only a very small
amount is fake news and hoaxes. Overall, this makes it extremely unlikely hoaxes changed the
outcome of this election in one direction or the other” [3]. He did not provide any data or
evidence to back this claim.
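The epidemic framing of fake-news spread [23] can be illustrated with a toy cascade on a network of two communities joined by only a few cross-links. This is a simple SI-style cascade, not the SEIZ model Jin et al. actually used; the graph and all probabilities are invented, and exact counts depend on the random seed.

```python
import random

random.seed(1)

def make_two_communities(n=50, p_in=0.2, p_out=0.002):
    """Random toy graph: two dense communities joined by very few cross-links."""
    edges = {i: set() for i in range(2 * n)}
    for i in range(2 * n):
        for j in range(i + 1, 2 * n):
            same_side = (i < n) == (j < n)
            if random.random() < (p_in if same_side else p_out):
                edges[i].add(j)
                edges[j].add(i)
    return edges

def spread(edges, seed_node=0, p_transmit=0.3, steps=20):
    """SI-style cascade: each step, every sharing node passes the story to
    each neighbor with probability p_transmit."""
    infected = {seed_node}
    for _ in range(steps):
        newly = set()
        for u in infected:
            for v in edges[u]:
                if v not in infected and random.random() < p_transmit:
                    newly.add(v)
        infected |= newly
    return infected

graph = make_two_communities()
reached = spread(graph)
own = sum(1 for v in reached if v < 50)
print(f"reached in seeded community: {own}, in the other: {len(reached) - own}")
```

With sparse cross-links, the cascade tends to saturate the seeded community long before it reaches the other one, which is the mechanism behind stories "staying within the same communities."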
As the dust continues to settle from the elections, research into the role of social media and
digital media platforms as key influencers will continue. We acknowledge that analysts are
unlikely ever to reach a commonly accepted explanation for the election outcomes. However, it is
imperative to study these potential causes and effects. Having laid out the research questions,
we now turn to possible solutions.
Future Solutions
Whether or not fake news and filter bubbles decisively affected the U.S. presidential election,
social media platforms like Facebook and Google are exploring ways to reduce their influence. In
November 2016, both companies announced they would ban websites that publish fake news from
their advertising networks, effectively cutting off those sites’ revenue streams [24]. Facebook
has also created new tools to flag fake
content and is partnering with third-party fact-checking organizations like Snopes and Politifact
[25]. Facebook is also developing better automatic fake-news-detection systems that will limit
the spread of such content.
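Automatic fake-news detection of the kind described above is commonly framed as text classification. The following is a deliberately tiny Naive Bayes sketch: every headline and label in the training set is invented, and production systems rely on far richer features, provenance signals, and much larger corpora.

```python
import math
from collections import Counter

# Toy training data: every headline and label below is invented for illustration.
train = [
    ("pope endorses candidate in shocking secret letter", "fake"),
    ("you won't believe what this politician did next", "fake"),
    ("miracle cure that doctors don't want you to know", "fake"),
    ("senate passes budget bill after lengthy debate", "real"),
    ("local election results certified by county officials", "real"),
    ("central bank holds interest rates steady", "real"),
]

def fit(data):
    """Count words per class and class frequencies."""
    word_counts = {"fake": Counter(), "real": Counter()}
    label_counts = Counter()
    for text, label in data:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def predict(text, word_counts, label_counts):
    """Pick the class with the higher log Naive Bayes score."""
    vocab = set(word_counts["fake"]) | set(word_counts["real"])
    best_label, best_lp = None, -math.inf
    for label in label_counts:
        lp = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            # Laplace smoothing keeps unseen words from zeroing the probability.
            lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

wc, lc = fit(train)
print(predict("shocking secret cure politicians don't want you to know", wc, lc))  # fake
```

Even this toy captures the core difficulty: the classifier learns surface style ("shocking," "you won't believe"), not truth, which is why platforms pair such models with human fact-checkers.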
Researchers and software developers have been looking into tools to help break out of filter
bubbles [26], including filtering algorithms and user interfaces that give users better control and
allow more diversity. Other tools (such as browser plugin Ghostery and search engine
DuckDuckGo) help anonymize users’ actions online, thus limiting personalized tracking and
recommendations.
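One concrete design for filtering algorithms that "allow more diversity" is re-ranking: instead of sorting purely by predicted relevance, greedily pick the next story that balances relevance against how much its viewpoint already appears in the feed. The stories, viewpoint labels, and relevance scores below are invented for illustration.

```python
# Invented stories: (title, viewpoint label, predicted relevance for this user).
stories = [
    ("tax cut praised",     "right",  0.90),
    ("tax cut criticized",  "left",   0.80),
    ("tax cut analysis",    "center", 0.60),
    ("rally draws crowds",  "right",  0.85),
    ("protest over policy", "left",   0.70),
]

def rerank(stories, diversity_weight=0.5):
    """Greedy re-ranking: penalize each candidate by how often its viewpoint
    already appears among the stories ranked so far."""
    ranked, shown = [], []
    pool = list(stories)
    while pool:
        def score(story):
            _, viewpoint, relevance = story
            redundancy = shown.count(viewpoint) / max(len(shown), 1)
            return relevance - diversity_weight * redundancy
        best = max(pool, key=score)
        pool.remove(best)
        ranked.append(best)
        shown.append(best[1])
    return [title for title, _, _ in ranked]

print(rerank(stories))
```

Under pure relevance ranking, the two "right" stories would occupy the top two slots; the diversity penalty instead promotes the highest-relevance opposing story to second place, the kind of exposure-broadening intervention surveyed in [26].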
Bot and spam detection is another major area of research. Many social media platforms already
use a range of tools, from machine learning to social network analysis, to detect and stop bots.
Independent groups and researchers have also developed tools to detect bots; for example,
researchers at Indiana University have developed BotOrNot (,
a service that allows anyone to check if a particular Twitter user is a bot.
Difficult Questions
In addition to technical enhancements and design choices, what other avenues, even public
policymaking, are available for combating these issues? This may be a particularly difficult
question in the U.S. due to free-speech protections under the First Amendment of the U.S.
Constitution. We already see legal and political tensions as Twitter implements internal policies
for flagging hate speech and closing specific accounts. Others have suggested a reinstatement of
media- and civic-literacy initiatives to help users discern trustworthy, genuine news sources.
These issues of fake news and filter bubbles are complex and nuanced, and they predate social
media; there is no easy solution. But it is vital that researchers continue to explore and
investigate them from diverse technical and social perspectives. Their skills, knowledge, and
voices are needed now more than ever, and for the good of all, to address these problems.
[1] Isaac, M. Facebook, in cross hairs after election, is said to question its influence. The New
York Times (Nov. 12, 2016);
[2] Isaac, M. and Ember, S. For election day influence, Twitter ruled social media. The New York
Times (Nov. 8, 2016);
[3] Kokalitcheva, K. Mark Zuckerberg says fake news on Facebook affecting the election is a
‘crazy idea.’ Fortune (Nov. 11, 2016);
[4] El-Bermawy, M.M. Your filter bubble is destroying democracy. Wired (Nov. 18, 2016);
[5] Sigdyal, P. and Wells, N. Twitter users scream ‘leave’ in Brexit vote, but ‘remain’ gains
ground. CNBC (June 23, 2016);
[6] Mitchell, A., Gottfried, J., and Matsa, K.E. Facebook top source for political news among
millennials. Pew Research Center (June 1, 2015);
[7] Bond, R.M., Fariss, C.J., Jones, J.J., Kramer, A.D.I., Marlow, C., Settle, J.E., and Fowler, J.H.
A 61-million-person experiment in social influence and political mobilization. Nature 489, 7415
(Sept. 2012), 295–298.
[8] Taylor, D.G., Lewin, J.E., and Strutton, D. Friends, fans, and followers: Do ads work on
social networks? Journal of Advertising Research 51, 1 (2011), 258–275.
[9] Hampton, K. and Hargittai, E. Stop blaming Facebook for Trump’s election win. The Hill
(Nov. 2016);
[10] Fields, J., Sengupta, S., White, J., Spetka, S., et al. Botnet Campaign Detection on Twitter.
Master of Science Thesis in Computer and Information Sciences, Department of Computer
Sciences, SUNY Polytechnic Institute, Utica, NY, 2016;
[11] Kollanyi, B., Howard, P.N. and Woolley, S.C. Bots and automation over Twitter during the
third U.S. presidential debate. Political Bots (Oct. 31, 2016);
[12] Pariser, E. The Filter Bubble: How the New Personalized Web Is Changing What We Read
and How We Think. Penguin, New York, 2011.
[13] Bakshy, E., Messing, S., and Adamic, L. Exposure to ideologically diverse news and
opinion on Facebook. Science 348, 6239 (2015), 1130–1132.
[14] Hosanagar, K., Fleder, D., Lee, D., and Buja, A. Will the global village fracture into tribes?
Recommender systems and their effects on consumer fragmentation. Management Science 60, 4
(2013), 805–823.
[15] Weisberg, J. Bubble trouble: Is web personalization turning us into solipsistic twits? Slate
(June 10, 2011);
[16] Hampton, K., Sessions Goulet, L., Rainie, L., and Purcell, K. Social networking sites and
our lives. Pew Research (2011);
[17] Tufekci, Z. How Facebook’s algorithm suppresses content diversity (modestly) and how the
newsfeed rules the clicks. Medium (2015);
[18] Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H.E., and
Quattrociocchi, W. The spreading of misinformation online. Proceedings of the National
Academy of Sciences 113, 3 (2016), 554–559.
[19] Maheshwari, S. How fake news goes viral: A case study. The New York Times (Nov. 20, 2016);
[20] Silverman, C. This analysis shows how viral fake election news stories outperformed real
news on Facebook. BuzzFeed (Nov. 16, 2016);
[21] Kang, C. Fake news onslaught targets pizzeria as nest of child-trafficking. The New York
Times (Nov. 21, 2016);
[22] Bender, B. and Hanna, A. Flynn under fire for fake news. Politico (Dec. 5, 2016);
[23] Jin, F., Dougherty, E., Saraf, P., Cao, Y., and Ramakrishnan, N. Epidemiological modeling
of news and rumors on Twitter. In Proceedings of the Seventh Workshop on Social Network
Mining and Analysis (2013). ACM Press, New York, Article No. 8.
[24] Kottasova, I. Facebook and Google to stop ads from appearing on fake news sites. CNN
(Nov. 15, 2016);
[25] Heath, A. Facebook is going to use Snopes and other fact-checkers to combat and bury ‘fake
news.’ Business Insider (Dec. 15, 2016);
[26] Resnick, P., Kelly Garrett, R., Kriplean, T., Munson, S.A., and Jomini Stroud, N. Bursting
your (filter) bubble: Strategies for promoting diverse exposure. In Proceedings of the 2013
Conference on Computer Supported Cooperative Work (San Antonio, TX, Feb. 23–27). ACM
Press, New York, 2013, 95–100.
Dominic DiFranzo is a post-doctoral associate in the Social Media Lab at Cornell
University, Ithaca, NY; he holds a Ph.D. in computer science from Rensselaer
Polytechnic Institute, Troy, NY, and was a member of the Tetherless World Constellation.
Kristine Gloria-Garcia joined the Aspen Institute Communications and Society Program
as a project manager in September 2016; previously, she served as a visiting
researcher at the Internet Policy Research Initiative at MIT, Cambridge, MA, and as a
privacy research fellow at the Startup Policy Lab. She holds a Ph.D. in cognitive science
from Rensselaer Polytechnic Institute, Troy, NY, and a master’s in media studies from
the University of Texas at Austin.
... Indeed, Google's search results, Facebook's newsfeed, or video recommendations from YouTube are all highly personalized and interconnected choice environments and are likely capable of nudging users in one direction or another. In this context, several studies have been investigating how potential positive feedback loops leading to flter bubbles can be reduced [33]. On social media timelines, personalized nudges, which display relevant information more prominently, lead users to interact with it more. ...
... In the end, a so-called flter bubble is created. In some cases, where the flter bubbles become problematic in terms of disinformation, social media platforms might use a warning nudge to break the user out of the vicious circle [33]. Another example, for the same purpose, is to present a feedback nudge to a user showing the balance of opinion on their timeline to raise awareness [101]. ...
... In this scenario, the reader receives interesting content, which is a good thing. However, social media has been criticised for creating digital echo chambers in which users see content and posts that only agree with their existing beliefs [24]. This content, which may also be false, reinforces the beliefs and opinions an audience already holds while not allowing them to see that alternatives exist [23]. ...
... In 2015, Facebook conducted a study that showed that although this social network's internal algorithm can select posts that confirm a user's political beliefs, the effect of filter bubbles is mainly due to the user's behaviour, such as how they click and search for specific content of interest. This shows that it is mainly the choices of the users themselves that creates this bubble rather than simply Facebook's algorithms [24]. ...
Full-text available
Social media is now the primary form of communication between internet users and has soared in popularity, which has directly impacted the spread of the phenomenon of fake news. Fake news is not only a widespread phenomenon; it is also problematic and dangerous for society. The aim of this study is to understand the phenomenon of fake news better. The study utilised a structural modelling equation in order to identify how Polish society perceives the problem of fake news and assess the extent to which it trusts content that is published on the internet. The key goal was to determine what factors have the most significant influence on the verification of information being viewed on the internet. By deploying the partial least squares method of validation, SmartPLS3 software was used to process the survey results. The strongest positive effect on information verification behaviour was found to be fake news awareness, which was followed by the intention to share information. The research did not consider any clear connections that may exist between the nature of fake news and its recipient; however, much of the fake news that appears on the internet is political in nature. The study can be used by news reporting companies and provides preliminary information for developers responsible for running social media sites as well as users who want to combat and limit the spread of fake news online. This study expands on the available literature related to fake news by identifying the effects on information verification behaviour of fake news awareness and the intention to share data.
... Surprisingly, this question has received little attention. Most prior work defaults to quantifying polarization based on the overall variance of societal opinions (opinions are typically encoded as real valued numbers) [8,18,25]. While mathematically convenient, any variance-based approach faces a basic challenge: standard models of opinion formation in social networks, like the ubiquitous DeGroot learning model [16], predict gradual convergence of opinion variance towards zero over time. ...
Full-text available
It is widely believed that society is becoming increasingly polarized around important issues, a dynamic that does not align with common mathematical models of opinion formation in social networks. In particular, measures of polarization based on opinion variance are known to decrease over time in frameworks such as the popular DeGroot model. Complementing recent work that seeks to resolve this apparent inconsistency by modifying opinion models, we instead resolve the inconsistency by proposing changes to how polarization is quantified. We present a natural class of group-based polarization measures that capture the extent to which opinions are clustered into distinct groups. Using theoretical arguments and empirical evidence, we show that these group-based measures display interesting, non-monotonic dynamics, even in the simple DeGroot model. In particular, for many natural social networks, group-based metrics can increase over time, and thereby correctly capture perceptions of increasing polarization. Our results build on work by DeMarzo et al., who introduced a group-based polarization metric based on ideological alignment. We show that a central tool from that work, a limit analysis of individual opinions under the DeGroot model, can be extended to the dynamics of other group-based polarization measures, including established statistical measures like bimodality. We also consider local measures of polarization that operationalize how polarization is perceived in a network setting. In conjunction with evidence from prior work that group-based measures better align with real-world perceptions of polarization, our work provides formal support for the use of these measures in place of variance-based polarization in future studies of opinion dynamics.
This chapter analyses examples of conflicts in standards that may illustrate the need for digital platforms to adopt a more proactive approach to corporate social responsibility (CSR). It asks what changes platform media would need to make to ‘take responsibility’ in the digital landscape., and undertaken an exploration of existing regulatory approaches and analyses of practice, to propose an enhanced vision of digital corporations’ application of CSR to benefit individuals and societies. The suggestion is that digital platforms be constructed as publishers of information, requiring them to be accountable for material carried on their sites.
Aufgrund der Verbreitung sozialer Medien haben sich öffentliche Diskurse in digitale Räume verlagert, wobei vor allem die Kommentarfunktionen digitaler Plattformen für Diskussionen genutzt werden. Es bestehen jedoch wenig Kenntnisse über das Vorhandensein fachlicher Bezüge in Kommentaren sowie zugrundeliegende Argumentationsstrukturen. Im vorliegenden Buchkapitel werden daher im Rahmen einer Fallstudie die Kommentare (N = 25.506) eines prominenten Videos auf der Plattform YouTube (Harald Lesch: „Missverständnisse zum Klimawandel aufgeklärt“) analysiert. Zwar war „CO2“ in allen Kommentaren mit Abstand das häufigste Wort, doch deuteten die Ergebnisse einer qualitativen Inhaltsanalyse einer Teilstichprobe von Kommentaren (n = 1021) an, dass nur ein relativ kleiner Teil der Kommentare überhaupt fachliche Bezüge aufwies (n = 206; 20,2 % der 1021 Kommentare). Während auch hier CO2 als häufigste inhaltliche Kategorie codiert wurde (n = 109; 10,7 % der 1021 Kommentare), zeigte sich in einem Vergleich von zwei Teilstichproben aus Kommentaren mit vielen (n = 521) und wenigen Likes (n = 500), dass vor allem Kategorien ohne fachlichen Bezug signifikant häufiger im Sample mit vielen Likes codiert wurden. Um die Verläufe fachlicher Diskurse innerhalb der Kommentare zu systematisieren, werden diese abschließend in einer Diskurskarte dargestellt. In Übereinstimmung mit der Literatur zeigt die empirische Analyse beispielhaft auf, wie kontrovers das Thema unter den Nutzenden von sozialen Medien diskutiert und dabei ebenfalls der wissenschaftliche Konsens zu den anthropogenen Einflüssen auf den Klimawandel angezweifelt wird. Bezugnehmend auf das Verständnis der Natur der Naturwissenschaften (engl. „Nature of Science“) wird abschließend die Rolle von sozialen Medien als Ort für informelles Lernen zum Klimawandel diskutiert.
Since 2014, disinformation has been a major issue in Indonesia's digital political environment. This chapter seeks to map the complex landscape of disinformation in Indonesia, to illustrate how disinformation has happened and affected those most vulnerable in political decision-making. Contributing to the growing debate on disinformation in digital electoral politics, it traces the types and actors of disinformation, as well as its implications for political polarization in Indonesia. Studies of the link between disinformation and digital politics in Indonesia's transitional democracy have remained limited, and most previous studies have focused on the rapid dissemination of hoaxes and/or digital politics within the context of specific issues or elections. Such studies have yet to trace the development of political disinformation over time. Social media provide all political stakeholders – state officials, politicians, and ordinary voters – a certain degree of freedom of speech.
Fake news is a threat to society. A huge amount of fake news is posted every day on social networks which is read, believed and sometimes shared by a number of users. On the other hand, with the aim to raise awareness, some users share posts that debunk fake news by using information from fact-checking websites. In this paper, we are interested in exploring the role of various psycholinguistic characteristics in differentiating between users that tend to share fake news and users that tend to debunk them. Psycholinguistic characteristics represent the different linguistic information that can be used to profile users and can be extracted or inferred from users’ posts. We present the CheckerOrSpreader model that uses a Convolution Neural Network (CNN) to differentiate between spreaders and checkers of fake news. The experimental results showed that CheckerOrSpreader is effective in classifying a user as a potential spreader or checker. Our analysis showed that checkers tend to use more positive language and a higher number of terms that show causality compared to spreaders who tend to use a higher amount of informal language, including slang and swear words.
We explore a hidden feedback loops effect in online recommender systems. Feedback loops result in degradation of online multi-armed bandit (MAB) recommendations to a small subset and loss of coverage and novelty. We study how uncertainty and noise in user interests influence the existence of feedback loops. First, we show that an unbiased additive random noise in user interests does not prevent a feedback loop. Second, we demonstrate that a non-zero probability of resetting user interests is sufficient to limit the feedback loop and estimate the size of the effect. Our experiments confirm the theoretical findings in a simulated environment for four bandit algorithms.
Conference Paper
Full-text available
"Fake news" has become a common buzzword in public, political, and scientific debates. Whereas the definition of the term and its political consequences are often highlighted, this paper seeks to provide an overview of the development, the most common dimensions of fake news, and their mode of action. Research shows that fake news can trigger and act in conjunction with numerous effects that influence recipients. A comprehensive overview of these effects is given in this paper.
Full-text available
The wide availability of user-provided content in online social media facilitates the aggregation of people around common interests, worldviews, and narratives. However, the World Wide Web (WWW) also allows for the rapid dissemination of unsubstantiated rumors and conspiracy theories that often elicit rapid, large, but naive social responses such as the recent case of Jade Helm 15--where a simple military exercise turned out to be perceived as the beginning of a new civil war in the United States. In this work, we address the determinants governing misinformation spreading through a thorough quantitative analysis. In particular, we focus on how Facebook users consume information related to two distinct narratives: scientific and conspiracy news. We find that, although consumers of scientific and conspiracy stories present similar consumption patterns with respect to content, cascade dynamics differ. Selective exposure to content is the primary driver of content diffusion and generates the formation of homogeneous clusters, i.e., "echo chambers." Indeed, homogeneity appears to be the primary driver for the diffusion of contents and each echo chamber has its own cascade dynamics. Finally, we introduce a data-driven percolation model mimicking rumor spreading and we show that homogeneity and polarization are the main determinants for predicting cascades' size.
Exposure to news, opinion, and civic information increasingly occurs through social media. How do these online networks influence exposure to perspectives that cut across ideological lines? Using de-identified data, we examined how 10.1 million U.S. Facebook users interact with socially shared news. We directly measured ideological homophily in friend networks and examined the extent to which heterogeneous friends could potentially expose individuals to cross-cutting content. We then quantified the extent to which individuals encounter comparatively more or less diverse content while interacting via Facebook's algorithmically ranked News Feed, and further studied users' choices to click through to ideologically discordant content. Compared to algorithmic ranking, individuals' choices about what to consume had a stronger effect in limiting exposure to cross-cutting content.
Broadcast media are declining in their power to decide which issues and viewpoints will reach large audiences. But new information filters are appearing, in the guise of recommender systems, aggregators, search engines, feed ranking algorithms, and the sites we bookmark and the people and organizations we choose to follow on Twitter. Sometimes we explicitly choose our filters; some we hardly even notice. Critics worry that, collectively, these filters will isolate people in information bubbles only partly of their own choosing, and that the inaccurate beliefs they form as a result may be difficult to correct. But should we really be worried, and, if so, what can we do about it? Our panelists will review what scholars know about selectivity of exposure preferences and actual exposure and what we in the CSCW field can do to develop and test ways of promoting diverse exposure, openness to the diversity we actually encounter, and deliberative discussion.
Characterizing information diffusion on social platforms like Twitter enables us to understand the properties of the underlying media and model communication patterns. As Twitter gains in popularity, it has also become a venue to broadcast rumors and misinformation. We use epidemiological models to characterize information cascades on Twitter resulting from both news and rumors. Specifically, we use the SEIZ enhanced epidemic model, which explicitly recognizes skeptics, to characterize eight events across the world spanning a range of event types. We demonstrate that our approach is accurate at capturing diffusion in these events. Our approach can be fruitfully combined with other strategies that use content modeling and graph-theoretic features to detect (and possibly disrupt) rumors.
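The SEIZ dynamics the abstract refers to (Susceptible, Exposed, Infected, skeptic) can be sketched as a small forward-Euler integration. The rate structure below follows the standard SEIZ formulation; the parameter values and initial conditions are illustrative assumptions, not the paper's fitted values for any event.

```python
# SEIZ rumor-diffusion sketch: susceptible users (S) who contact a
# spreader (I) or a skeptic (Z) either adopt a state directly or become
# exposed (E); exposed users later turn into spreaders.

def seiz_step(S, E, I, Z, N, beta, b, p, l, rho, eps, dt):
    s_to_i = beta * S * I / N            # S contacts a spreader
    s_to_z = b * S * Z / N               # S contacts a skeptic
    e_to_i = rho * E * I / N + eps * E   # exposed become spreaders
    dS = -(s_to_i + s_to_z)
    dE = (1 - p) * s_to_i + (1 - l) * s_to_z - e_to_i
    dI = p * s_to_i + e_to_i
    dZ = l * s_to_z
    return S + dS * dt, E + dE * dt, I + dI * dt, Z + dZ * dt

N = 10_000.0
S, E, I, Z = N - 2, 0.0, 1.0, 1.0        # one spreader, one skeptic
for _ in range(5000):                     # integrate to t = 500
    S, E, I, Z = seiz_step(S, E, I, Z, N, beta=0.3, b=0.2,
                           p=0.4, l=0.5, rho=0.1, eps=0.05, dt=0.1)
print(f"final S={S:.0f} E={E:.0f} I={I:.0f} Z={Z:.0f}")
```

Because every outflow term reappears as an inflow elsewhere, the total population S + E + I + Z is conserved; in the paper, the parameters are fitted per event, and the ratio of entries into E versus exits from E is used to distinguish rumors from news.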
Human behaviour is thought to spread through face-to-face social networks, but it is difficult to identify social influence effects in observational studies, and it is unknown whether online social networks operate in the same way. Here we report results from a randomized controlled trial of political mobilization messages delivered to 61 million Facebook users during the 2010 US congressional elections. The results show that the messages directly influenced political self-expression, information seeking and real-world voting behaviour of millions of people. Furthermore, the messages not only influenced the users who received them but also the users' friends, and friends of friends. The effect of social transmission on real-world voting was greater than the direct effect of the messages themselves, and nearly all the transmission occurred between 'close friends' who were more likely to have a face-to-face relationship. These results suggest that strong ties are instrumental for spreading both online and real-world behaviour in human social networks.
Personalization is becoming ubiquitous on the World Wide Web. Such systems use statistical techniques to infer a customer’s preferences and recommend content best suited to him (e.g., “Customers who liked this also liked…”). A debate has emerged as to whether personalization has drawbacks. By making the web hyper-specific to our interests, does it fragment internet users, reducing shared experiences and narrowing media consumption? We study whether personalization is in fact fragmenting the online population. Surprisingly, it does not appear to do so in our study. Personalization appears to be a tool that helps users widen their interests, which in turn creates commonality with others. This increase in commonality occurs for two reasons, which we term volume and product mix effects. The volume effect is that consumers simply consume more after personalized recommendations, increasing the chance of having more items in common. The product mix effect is that, conditional on volume, consumers buy a more similar mix of products after recommendations.
Social-networking sites (SNS) such as Facebook and Twitter are growing in both popularity and number of users. For advertisers and the sites themselves, it is crucial that users accept advertising as a component of the SNS. Anecdotal evidence indicates that social-networking advertising (SNA) can be effective when users accept it, but the perception of excessive commercialization may lead to user abandonment. Empirical support for these propositions, however, is lacking. Based on media uses and gratification theory, the authors propose and empirically test a model of content-related, structural, and socialization factors that affect users' attitudes toward SNA.
Isaac, M. and Ember, S. For election day influence, Twitter ruled social media. The New York Times (Nov. 8, 2016);
Kokalitcheva, K. Mark Zuckerberg says fake news on Facebook affecting the election is a 'crazy idea.' Fortune (Nov. 11, 2016);
Mitchell, A., Gottfried, J., and Matsa, K.E. Facebook top source for political news among millennials. Pew Research Center (June 1, 2015);