Opinion amplication causes
extreme polarization in social
networks
Soo Ling Lim
1* & Peter J. Bentley
1,2
Extreme polarization of opinions fuels many of the problems facing our societies today, from issues on human rights to the environment. Social media provides the vehicle for these opinions and enables the spread of ideas faster than ever before. Previous computational models have suggested that significant external events can induce extreme polarization. We introduce the Social Opinion Amplification Model (SOAM) to investigate an alternative hypothesis: that opinion amplification can result in extreme polarization. SOAM models effects such as sensationalism, hype, or "fake news" as people express amplified versions of their actual opinions, motivated by the desire to gain a greater following. We show for the first time that this simple idea results in extreme polarization, especially when the degree of amplification is small. We further show that such extreme polarization can be prevented by two methods: preventing individuals from amplifying more than five times, or through consistent dissemination of balanced opinions to the population. It is natural to try and have the loudest voice in a crowd when we seek attention; this work suggests that instead of shouting to be heard and generating an uproar, it is better for all if we speak with moderation.
Polarization of opinions on social networks is increasingly evident today, with highly contrasting beliefs being shared in politics, environmental and social issues. The likely repercussions of polarization in our societies are well established, from damaging the democratic process to a decrease in tolerance for others1. With so much at stake, the use of computational models to understand the causes of polarization is receiving more attention.

When using models to study social influences, it is common for opinions to converge towards a consensus, or fragment into two or more clusters2. In both cases, final opinions fall within the initial range of opinions. Yet social media breeds not just polarized opinions but extreme opinions that might otherwise be considered outliers in population norms. Thus in our models, a key outcome to study is when individuals influence each other such that opinions diverge towards extremes that are outside their initial range of opinions.

Synchronized external events have been shown to be a possible cause of polarization2. However, it is possible that polarization happens gradually in the network without needing external intervention. In social media, it is common for people to amplify what they actually feel about something in order to attract attention, because the more extreme a post, the more popular it is1,3. We use the term opinion amplification to encompass the range of behaviors by users that may distort the original opinion with a more positive or negative sentiment. Such behaviors include making unfounded assumptions, making generalizations or summaries, selectively quoting, editorializing, or misunderstanding3.

Opinion amplification may happen at a low level at all times, but it proliferates across a network once a topic is trending4. Here we look at opinion amplification as a potential cause of polarization, focusing specifically on extreme polarization. This differs fundamentally from previous work: earlier models investigated polarization resulting from interactions between individuals and, more recently, from events external to the individuals themselves; in this work we argue that noise fails to model this crucial phenomenon. We hypothesize that opinions can become biased towards a more positive or negative sentiment through opinion amplification. We propose the Social Opinion Amplification Model (SOAM) to investigate these ideas. We remove well-studied variables such as noise from the model in order to identify the minimum features needed to create extreme polarization in a population through opinion amplification.
1Department of Computer Science, University College London, London, UK. 2Autodesk Research, London, UK. *email: s.lim@cs.ucl.ac.uk
Background
The inexorable draw of expressing extreme sentiments may first have emerged in conventional media. It is well known that health research claims become exaggerated in press releases and news articles, with more than 50% of the press releases from certain universities exaggerated, and some news agencies exaggerating about 60% of the articles they publish5. This is "spin", defined as specific reporting strategies, intentional or unintentional, that can emphasize the beneficial effect of the experimental treatment6. "Sensationalism" is a close bedfellow when reporting general topics—a discourse strategy of "packaging" information in news headlines in such a way that news items are presented as more interesting, extraordinary and relevant7. These practices became ever more prevalent as the competition for online customers increased, becoming refined into new genres such as "clickbaits"—nothing but amplified headlines designed to lure readers to click on the link8—and hype in online reviews, where the hyped review is always absolutely positive or negative9.
As conventional media has transitioned to social media, today everyone is a "media outlet", so the lure of attention-seeking behavior is now felt by individuals. Influencers have used beauty filters to make the products they are advertising appear more effective, resulting in warnings by the Advertising Standards Authority (ASA)10. Young people aged 11–18 were observed to exaggerate their behaviors as they aimed to live up to amplified claims about popularity11. Even the accidental use of certain words can make readers believe causal relationships that may not exist12. This is also known as sentiment polarity, an important feature in fake news: in order to make their news persuasive, authors often express strong positive or negative feeling in the content13,14. The result is that bizarre conspiracy theories that might once have been the domain of a tiny minority are now routinely given the same credence as evidence-backed science by large portions of the population15.
It is infeasible to perform experiments on real human populations in order to understand causation of extreme polarization. Computational models provide an essential tool to overcome this empirical limitation. Computational models have been used for decades to study opinion dynamics, with early works often focused on consensus formation16–18. Deffuant et al. developed a model of opinion dynamics in which both convergence of opinions into one average opinion and convergence into multiple opinion clusters are observed19. Their model consists of a population of N agents i with continuous opinions x_i. At each timestep, two randomly chosen agents "meet" and re-adjust their opinions when their difference of opinion is smaller in magnitude than a threshold ε. Suppose that the two agents have opinions x and x′ with |x − x′| < ε; the opinions are then adjusted according to:

  x = x + μ · (x′ − x)  and  x′ = x′ + μ · (x − x′)    (1)

where μ is the convergence parameter, taken between 0 and 0.5 during the simulations. Deffuant et al. found that the value of ε is the main influence on the dynamics of the model: when it is high, convergence into one opinion occurs, and when it is low, polarization/fragmentation occurs (convergence into multiple opinions)19. μ and N only influence convergence time and the distribution of final opinions. They applied their model to a social network of agents, whereby any agent in the model can only interact with 4 connected neighbors on a grid (so that the random selection of agents to interact can only come from connected neighbors), and found the same results.
Hegselmann and Krause developed a model with bounded confidence to investigate opinion fragmentation, in which consensus and polarization are special cases20. The Hegselmann-Krause (HK) model is defined as:

  x_i(t+1) = |I(i, x(t))|⁻¹ · Σ_{j ∈ I(i, x(t))} x_j(t)   for t ∈ T    (2)

where I(i, x) = {1 ≤ j ≤ n : |x_i − x_j| ≤ ε_i} and ε_i ≥ 0 is the confidence level of agent i. Agent i takes only those agents j into account whose opinions differ from his own by not more than ε_i. The base case assumes a uniform level of confidence, i.e., ε_i = ε for all agents i. The authors found that higher values of the confidence threshold ε lead to consensus, while lower values lead to polarization and fragmentation. In all their runs, regardless of whether consensus or polarization occurs, the range of opinions decreases as the simulation runs.
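For comparison, a minimal sketch of one synchronous HK update (Eq. 2), again with our own naming; each agent averages the opinions of all agents within its confidence level, itself included:

```python
import numpy as np

def hk_step(x: np.ndarray, eps: float) -> np.ndarray:
    """One synchronous Hegselmann-Krause update (Eq. 2)."""
    distances = np.abs(x[:, None] - x[None, :])  # pairwise opinion distances
    within = distances <= eps                    # I(i, x): includes agent itself
    return (within @ x) / within.sum(axis=1)     # average over confidants
```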
Fu etal. modied the HK model by dividing the population into open-minded, moderate-minded and closed-
minded agents21. ey found that the number of nal opinion clusters is dominated by the closed-minded agents;
open-minded agents cannot contribute to forming opinion consensus and the existence of open-minded agents
may diversify the nal opinions.
Cheng and Yu suggested that in many social situations, an individual's opinion formation and expression may differ because the individual feels pressured to express an opinion similar to the public opinion in the group22. They proposed a bounded confidence plus group pressure model, in which each individual forms an inner opinion relative to the bound of confidence and expresses an opinion taking group pressure into consideration. A group with all individuals facing group pressure always reaches a consensus. In a mixed group with both pressured and non-pressured individuals, the consensus threshold ε is significantly reduced, and group pressure does not always help to promote consensus; as in other models, polarization does not occur in their work.
Most recently, Condie and Condie classified social influence into assimilative and differentiative2. Assimilative influence occurs when opinions converge towards a consensus, or fragment into two or more converging clusters, all within the initial range of opinions. Differentiative influence—the focus of our work—occurs when individuals with very dissimilar opinions can influence each other, causing divergence towards extreme opinions (see Fig. 1). Condie and Condie proposed the Social Influence and Event Model (SIEM)2, which builds on the HK bounded confidence model with ε as confidence threshold, with the following main differences: (1) agents form a social network; (2) an individual i will only change their opinion if their certainty, C_{i,t} ∈ [0, 1], is less than the average certainty of the other individuals with which they interact at time t; (3) most importantly, events can influence many individuals synchronistically over a limited period of time. Events can have a large impact on the distribution of opinions because their influence acts synchronistically across a large proportion of the population, whereas
an individual can only interact with small numbers of other individuals at any particular time. The simulation results showed that SIEM without events exhibited the range of behaviors generated by other influence models under differing levels of confidence threshold ε, leading to consensus (or assimilative influence in their definition). With the presence of strong events, when the confidence threshold ε is high (low homophily), opinions swing between extremes, and when the confidence threshold ε is low (high homophily), opinions diverge into extremes. Condie and Condie2 also introduced a measure of conflict in the population, O_t = SD(O_{i,t}), which they defined as the standard deviation of individual opinions O_{i,t} across the population at timestep t.

Figure 1. Opinions plotted over time under assimilative influence (left: convergence), bounded assimilative influence (middle: fragmentation) and differentiative influence (right: divergence)2.
Building further on these ideas, Macy et al. used a general model of opinion dynamics to demonstrate the existence of tipping points, at which even an external threat, such as a global pandemic or economic collapse, may be insufficient to reverse the self-reinforcing dynamics of partisan polarization23. Agents in the model have initially random locations in a multidimensional issue space consisting of membership in one of two equal-sized parties and positions on 10 issues. Agents then update their issue positions by moving closer to nearby neighbors and farther from those with whom they disagree, depending on the agents' tolerance of disagreement and strength of party identification compared to their ideological commitment to the issues. They manipulated agents' tolerance for disagreement and strength of party identification, and introduced exogenous shocks corresponding to events (following Condie and Condie2) that create a shared interest against a common threat (e.g., a global pandemic).
These works all demonstrate the value of this form of modelling for exploring opinion dynamics, while assuming that an individual's expressed opinion is the same as their actual opinion.
Methods
As explored in the previous section, people may express more extreme opinions on social media compared to their own internal beliefs, and we hypothesize that this may cause influence across the population towards a more positive or negative sentiment. We incorporate this notion of an agent presenting an amplified version of their own opinion in our model, which is built on the Hegselmann–Krause (HK) bounded confidence model of opinion formation.

The Social Opinion Amplification Model (SOAM) consists of a network of individuals, where individuals can influence other individuals that they are connected to on the social network in relation to a specific issue. Opinions are continuous and individuals influence each other in each timestep.

The key innovation in our model is the concept of an expressed opinion, which for individuals who have a tendency to amplify is stronger than the individual's actual opinion. This is backed up by early theories that online opinion expression does not necessarily reflect an individual's actual opinion24, and by recent literature showing that people express stronger opinions on social media than they actually hold3, or hold different public and private opinions25.
We make a random directed network, the most common network structure used to build synthetic social networks26. The use of directed networks enables the representation of asymmetric relationships: individual A may affect individual B but the reverse may not be true (in online social networks this might correspond to B following A without reciprocation, resulting in A influencing B, but B not influencing A). The network comprises k average links per node; the entire network is considered, including any subnetworks, following Condie and Condie2.
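As a concrete illustration, the following is a minimal sketch of how such a network could be constructed, assuming an Erdős–Rényi G(n, p) construction with p chosen so that the expected number of links per node is k; the use of networkx and the function name are our assumptions, not taken from the paper's code.

```python
import networkx as nx

def make_network(n: int, k: float, seed: int = 0) -> nx.DiGraph:
    """Random directed network with an average of k links per node.
    An edge A -> B means that A influences B (B 'follows' A)."""
    p = k / (n - 1)  # expected out-degree is p * (n - 1) = k
    return nx.gnp_random_graph(n, p, seed=seed, directed=True)
```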
An individual i in the network at timestep t = 0 has an initial opinion O_{i,t=0}. The opinion of an individual i at timestep t > 0 is defined as:

  O_{i,t} = ( O_{i,t−1} + Σ_{j ∈ I_{i,t}} SO_{j,t} ) / ( 1 + |I_{i,t}| )    (3)

where O_{i,t−1} is individual i's opinion in the previous timestep, and I_{i,t} is the set of individuals connected to individual i whose expressed opinion is within the confidence threshold ε, as per Hegselmann and Krause20:

  I_{i,t} = { j : |O_{i,t−1} − SO_{j,t−1}| ≤ ε }    (4)

An individual i's expressed opinion SO_{i,t} is calculated as follows:

  SO_{i,t} = O_{i,t−1} − σ_{i,t}   if E_{i,t} = True and O_{i,t−1} < 0
  SO_{i,t} = O_{i,t−1} + σ_{i,t}   if E_{i,t} = True and O_{i,t−1} ≥ 0
  SO_{i,t} = O_{i,t−1}             otherwise    (5)

where E_{i,t} is whether individual i amplifies its opinion at timestep t, and σ_{i,t} is the individual's amplified amount at timestep t.
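To make the update rule concrete, here is a minimal sketch of one SOAM timestep implementing Eqs. (3)–(5), reusing the make_network() sketch above; the array layout, function names, and the convention that an individual is influenced by its in-neighbors are our assumptions, not the authors' published implementation.

```python
import numpy as np
import networkx as nx

def soam_step(g: nx.DiGraph, O: np.ndarray, amplifier: np.ndarray,
              eps: float, p: float, s: float,
              rng: np.random.Generator) -> np.ndarray:
    """One synchronous SOAM timestep: expressed opinions via Eq. (5),
    then bounded-confidence averaging via Eqs. (3)-(4)."""
    n = len(O)
    # Eq. (5): amplifiers push their expressed opinion away from zero
    # with probability p, by a random amount sigma in [0, s].
    E = amplifier & (rng.random(n) <= p)
    sigma = rng.uniform(0.0, s, size=n)
    SO = np.where(E, np.where(O < 0, O - sigma, O + sigma), O)
    # Eqs. (3)-(4): average own previous opinion with the expressed
    # opinions of connected individuals within the confidence threshold.
    new_O = O.copy()
    for i in range(n):
        influencers = [j for j in g.predecessors(i)   # edge j -> i
                       if abs(O[i] - SO[j]) <= eps]   # Eq. (4)
        new_O[i] = (O[i] + sum(SO[j] for j in influencers)) / (1 + len(influencers))
    return new_O
```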
Table1 provides the denitions for SOAM variables at the individual level and Table2 provides the denitions
for SOAM variables at the system level. Finally, Table3 compares the main features of SOAM with the features
of similar models in the literature to illustrate the similarities and dierences and justify our design decisions.
Table 1. SOAM individual-level variables and values.

Variable | Definition | Value range
O_{i,t=0} | Initialized opinion of individual i at timestep 0, a random number within the value range. A value of −1.0 means a very negative opinion on the topic, and a value of 1.0 means a very positive opinion on the topic. Note that for t > 0, there is potential for opinions to go beyond the original opinion range [−1.0, 1.0] | [−1.0, 1.0]
O_{i,t>0} | Opinion of individual i at timestep t > 0 | [−∞, ∞]
E_i | Whether or not individual i is an amplifier | True/False
E_{i,t} | Whether or not individual i amplifies their opinion in timestep t; True if E_i = True and P_{i,t} ≤ p, where P_{i,t} is a generated probability for individual i at timestep t | True/False
σ_{i,t} | Amplified amount for individual i at timestep t, a random number between 0 and s, where s is the strength of amplification (see Table 2) | [0, s]
Table 2. SOAM system-level variables and values.

Variable | Definition | Value range
t | Timesteps | [0, ∞]
n | Number of nodes | [0, ∞]
k | Average links per node | [0, n]
ε | Confidence threshold | [0, 1]
π | Proportion of the population who are amplifiers | [0, 1]
p | Probability of amplifiers amplifying opinions | [0, 1]
s | Strength of amplification | [0, 1]
Table 3. SOAM features compared to the literature. We only compare with models that SOAM is based on.

Model | Social network | How opinion is expressed | Opinion update | Influence
HK bounded confidence model20 | Fully connected | As is: what the individual thinks and what the individual expresses are the same | Influence occurs only if opinions are not too far from each other (within confidence threshold ε) | Each individual influences all the other individuals
Social Influence and Event Model2 | Random network with average links per individual of k = 5; a new network is formed every timestep | As is: what the individual thinks and what the individual expresses are the same | An individual will only change their opinion if their certainty is less than the average certainty of the other individuals with which they interact at timestep t. An individual is influenced by an event if the event strength is not too far from the individual's opinion (within ε) and the individual's confidence is less than or equal to the event strength | Influence occurs between linked individuals
SOAM | Random network with k = 2, 5 or 10 average links per individual | Some individuals provide expressed opinions that are amplified versions of their actual opinions | Influence occurs only if opinions are not too far from each other (within ε) | Influence occurs between linked individuals
Results

Given the hypothesis that amplified individual opinions can cause polarization, we study the effect of amplification on opinion dynamics under different confidence thresholds. We compare the model baseline without amplification against the model with amplification. In more detail:

1. No amplification, for confidence thresholds ε = 0.2 and 0.8. These confidence threshold settings follow the range of settings explored in Condie and Condie2. The confidence threshold determines the range in which an individual will re-adjust their opinion, i.e., an individual will re-adjust their opinion based on other individuals' opinions if the difference of opinion is smaller in magnitude than the confidence threshold, so a low confidence threshold means that individuals are less likely to re-adjust their opinions.

2. Amplification, where the proportion of the population who are amplifiers is π = 0.2 (low proportion of amplifiers) or 0.5 (high proportion of amplifiers), with amplification probability p = 0.5 and amplification strength s = 0.5, for confidence thresholds ε = 0.2 and 0.8.
We ran the model for t = 400 timesteps, with n = 100 nodes and k = 5 average links per node, and plotted each individual's opinion over time. Results were invariant to the number of nodes for tests up to n = 1000. For clarity, our plot scale ranges from −2.0 to 2.0 (doubling the initial opinion range) in order to show extreme polarization if it exists. We plot the actual opinions and not the expressed opinions in this work, as they indicate the degree to which population opinions are truly modified: while we may all exaggerate at times, our actions are determined by our true beliefs. Our results show that when there are no amplifiers, we see the usual convergence and fragmented convergence, and the opinion range is always within the initial range. In other words, assimilative influence (Fig. 1, left) and bounded assimilative influence (Fig. 1, middle) occurred in Fig. 2a,b respectively. When there is amplification (π = 0.2 and 0.5), we observe that the extreme polarization illustrated in Fig. 1 (right) occurs, see Fig. 2c–f. When ε = 0.8, extreme polarization tends to occur in a single direction (e.g., Fig. 2c shows convergence to extreme negative sentiment and Fig. 2e shows convergence to extreme positive sentiment), while ε = 0.2 results in multiple convergences of clusters, some extreme and some not, see Fig. 2e,f. When ε = 0.2 and the proportion of amplifiers is 50% (π = 0.5), extreme polarization occurs in both positive and negative sentiments (Fig. 2f). Note that the polarization more than doubles the initial opinion sentiment values, showing extreme sentiments beyond 2.0 or lower than −2.0 (our Fig. 2 opinion plot scale is from −2.0 to 2.0).
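As a hedged illustration of how such a run could be reproduced, the following driver wires together the make_network() and soam_step() sketches above under one amplification condition; the seed and overall structure are ours, not the authors' published code.

```python
import numpy as np

# Settings from this experiment: t = 400 timesteps, n = 100 nodes,
# k = 5 average links per node; one amplification condition.
rng = np.random.default_rng(42)
n, k, timesteps = 100, 5, 400
eps, pi, p, s = 0.8, 0.5, 0.5, 0.5

g = make_network(n, k, seed=42)
O = rng.uniform(-1.0, 1.0, size=n)   # initial opinions O_{i,0}
amplifier = rng.random(n) < pi       # fixed subset of amplifiers E_i

history = [O.copy()]
for _ in range(timesteps):
    O = soam_step(g, O, amplifier, eps, p, s, rng)
    history.append(O.copy())
# Each individual's trajectory in `history` can then be plotted, as in Fig. 2.
```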
SOAM illustrates that when individuals amplify their opinions, extreme polarization occurs, and when the proportion of amplifiers is higher, extreme polarization occurs more quickly and at a greater magnitude. Although our work here focuses on directed networks, the same findings are evident for undirected networks. To understand in more detail how variations in the parameters relating to amplification can affect the development of the distribution of population opinion across time, we ran SOAM under a range of different variables and settings:
(a) Baseline: Condence threshold no amplication. How does dierent condence thresholds,
ε
, aect conict
when there is no amplication?
k
= 5, for condence thresholds
ε
= 0.2, 0.5 and 0.8.
(b) Links per node no amplication. How does the dierent average number of links per node,
k
, aect conict
when there is no amplication?
k
= 2, 5 and 10, for condence threshold
ε
=0.8, comparing with Baseline.
(c) Condence threshold with amplication. How do dierent condence thresholds,
ε
, aect conict when
there is a low proportion of ampliers?
π
= 0.2,
p
= 0.5,
s
= 0.2,
k
= 5, for condence thresholds
ε
= 0.2, 0.5
and 0.8, comparing with Baseline.
(d) Strength of amplication. How does strength of amplication,
s
, aect the results?
k
= 5,
ε
= 0.2,
π
= 0.2,
p
= 0.5,
s
= 0.2, 0.5 and 0.8, comparing with Baseline.
(e) Proportion of ampliers. How does proportion of ampliers,
π
, aect the results?
k
= 5,
ε
= 0.2,
p
= 0.5,
s
=
0.2,
π
= 0.2, 0.5 and 0.8, comparing with Baseline.
(f) Probability of amplication. How does probability of ampliers amplifying opinions,
p
, aect the results?
k
= 5,
ε
= 0.2,
π
= 0.2,
s
= 0.2,
p
= 0.2, 0.5 and 0.8, comparing with Baseline.
Similar to the previous experiment, we ran the model for t = 400 timesteps and n = 100 nodes. We measure and plot population conflict over time, defined by Condie and Condie2 as the standard deviation of population opinion, O_t = SD(O_{i,t}), where O_{i,t} is the individual's opinion at timestep t. Conflict is a useful measure when we wish to understand the diversity of opinions in the population, with higher conflict indicative of a broader range (which may be the result of polarization).
Our results show that conict levels decline rapidly when condence threshold
ε
is high, see Fig.3a when
ε
= 0.8, conict reached a level of zero very early on in the run, which is also consistent with the results in Fig.2a.
And vice versa, low condence threshold
ε
= 0.2 results in highest conict among the three thresholds, although
without amplication, the level of conict is only slightly above the random opinion distribution at
t
= 0. Reduc-
ing the average number of links per node increases conict, Fig.3b shows that with a very low average number
of links per node of
k
= 2, conict remains steady at around 0.3, while
k
= 5 and 10, conict reduces to zero. We
can see that in Fig.3c, lower condence threshold results in higher conict, this similar to Fig.3a where there
is no amplication. When there is amplication, low and medium levels of condence threshold
ε
= 0.2 and
0.5, results in conict levels that continue to increase as time progresses, reaching approximately 1.0for
ε
= 0.5
and 1.25 for
ε
= 0.2. InFig.3d,
s
= 0.2 is the only setting where conict increases over time, while
s
= 0.5 and
0.8 maintains conict as the same level as the start, this is because a high strength of amplication may make
the opinion so extreme that others can no longer relate to it (it falls outside the condence threshold) and so
no longer inuences others. Increasing the proportion of ampliers increases conict, with
π
= 0.8 more than
double the original conict level at
t
= 150 (Fig.3e). Increasing the probability of amplication increases conict,
Fig.3f shows that at
p
= 0.5 and 0.8, conict reaches more than 1.0.
Countering extreme polarization
SOAM suggests that opinion amplication, even with low occurrence, and especially with low amplication can
cause extreme polarization in the population. Given that such polarization is also evident in real world social
networks, in this section we examine potential methods to counter, prevent, or even reverse extreme polarization.
Recent research on polarisation mitigation and opinion control suggests various approaches, for example, Musco
Content courtesy of Springer Nature, terms of use apply. Rights reserved
Figure2. Opinions of individuals starting from a random distribution [−1.0, 1.0] under a range of conditions.
Red dots denote individuals who are amplifying in that timestep. Grey areas indicate opinions outside the initial
opinion range. Y-axis shown from −2.0 to 2.0 (double the initial opinion range) for clarity; in (c) to (f), opinions
exceed this range and become even more extreme.
Figure3. Conict (
Ot
) tracked over 400 timesteps, starting from a uniform random distribution of opinions
(
Ot
=
1
/
3
=0.577
), indicated with a grey dotted line in each chart. Shown are dependencies on: (a)
condence threshold when there were no amplications; (b) average number of connections per individual
per timestep when there were no amplications; (c) condence threshold when there were amplications; (d)
strength of amplication; (e) proportion of ampliers; and (f) probability of amplication.
etal. studied ways to minimise polarisation and disagreement in social networks27, Garimella etal. suggests
controversy can be reduced by connecting opposing views28, Rossi etal. studied closed loops between opinion
formation and personalised recommendations29, while Matakos etal. proposed a recommender-based approach
to break lter bubbles in social media30. Here we examine two techniques in use today by social networks to see
their eectiveness in our model.
Countering method 1: the five-strike system. A common strategy used by online social networks is to stop users from posting after a number of offenses against their rules. (For example, Twitter's medical misinformation policy (https://help.twitter.com/en/rules-and-policies/medical-misinformation-policy) has a strike system: each violation of the policy counts as a strike, and five or more strikes result in a permanent suspension of the Twitter account.) We implement this policy by detecting amplified posts: if a user amplifies more than 5 times, they are no longer allowed to post further. To study the effect, we add a max amplify parameter, so that each agent can only amplify 5 times. Once they exceed this number, they are removed. In order to keep the population constant, we replace a removed individual with a new individual with the same default probabilities and a random opinion.
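A minimal sketch of this intervention, applied after each timestep; it assumes access to the mask of who amplified in that timestep (e.g., returned alongside the opinions by soam_step()), and all names here are our own, hypothetical ones.

```python
import numpy as np

def apply_five_strikes(O: np.ndarray, amplifier: np.ndarray,
                       strikes: np.ndarray, amplified_now: np.ndarray,
                       pi: float, rng: np.random.Generator,
                       max_strikes: int = 5) -> None:
    """Remove anyone who has amplified more than max_strikes times and
    replace them with a fresh individual (random opinion, amplifier
    status redrawn with default probability pi), keeping the
    population size constant; arrays are modified in place."""
    strikes[amplified_now] += 1           # count this timestep's amplified posts
    removed = strikes > max_strikes
    m = int(removed.sum())
    if m:
        O[removed] = rng.uniform(-1.0, 1.0, m)    # replacement opinions
        amplifier[removed] = rng.random(m) < pi   # same default probabilities
        strikes[removed] = 0                      # replacements start clean
```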
We ran the model with the same amplification settings as in the previous section: timesteps t = 400, number of nodes n = 100, and average links per node k = 5, with the proportion of the population who are amplifiers π = 0.5, amplification probability p = 0.5, and amplification strength s = 0.5, for confidence thresholds ε = 0.2 and 0.8. We compare a normal run of SOAM with a run that uses the maximum amplify intervention, both starting from the same random seed to ensure that the composition of the initial populations is identical.
While extreme polarization occurred in Fig. 4a, after applying the curbing method, Fig. 4b shows that extreme polarization no longer occurs. The same results can be seen for Fig. 4c versus Fig. 4d.
Figure4. Opinions of individuals starting from a random distribution [−1.0, 1.0] with corresponding conict
plots. Red dots denote individuals who are amplifying in that timestep. Grey areas indicate opinions outside the
initial opinion range.
Countering method 2: disseminating balanced opinions. A second approach to countering extreme opinions on social networks is for institutions to disseminate balanced opinions about any given topic. We model this by introducing an additional 5 random external opinions at every other timestep, randomly spread across the initial range [−1.0, 1.0] and representing a normal range of non-polarized opinions. Every individual has access to this same set of opinions every other timestep, and the same confidence threshold applies: people are only influenced by opinions that are within the confidence threshold. This simulates institutions exercising correctional behaviors, countering misinformation by publicly disseminating less polarized opinions across the original range of "normal" opinions. For example, the Science Media Centre (https://www.sciencemediacentre.org/about-us/) explicitly aims to disseminate "accurate and evidence-based information about science and engineering through the media, particularly on controversial and headline news stories when most confusion and misinformation occurs".
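A minimal sketch of how these external opinions could be injected, under the assumption that on the timesteps when they are published they simply join each individual's set of within-threshold influences in the Eq. (3) average; the names and integration point are ours.

```python
import numpy as np

def external_opinions(t: int, rng: np.random.Generator,
                      every: int = 2, m: int = 5) -> np.ndarray:
    """Every `every` timesteps, publish m balanced external opinions
    drawn uniformly from the initial range [-1.0, 1.0]; otherwise none.
    Every individual sees all of them, filtered by the same confidence
    threshold eps as ordinary neighbors."""
    if t % every == 0:
        return rng.uniform(-1.0, 1.0, m)
    return np.empty(0)
```

Inside soam_step(), the returned values would be appended to each individual's list of candidate influences before applying the Eq. (4) filter.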
We ran the model with the same settings as before: timesteps t = 400, number of nodes n = 100, and average links per node k = 5, with the proportion of the population who are amplifiers π = 0.5, amplification probability p = 0.5, and amplification strength s = 0.5, for confidence thresholds ε = 0.2 and 0.8. Again, we compare a normal run of SOAM with a run that uses the intervention, both starting from the same random seed to ensure that the composition of the initial populations is identical.
While extreme polarization occurred in Fig. 5a, Fig. 5b shows that extreme polarization no longer happens, although some opinions drift slightly outside the original range. The same results can be seen for Fig. 5c versus Fig. 5d. This shows that with a small but consistent intervention, extreme polarization can be curbed.
Figure5. Opinions of individuals starting from a random distribution [−1.0, 1.0] with corresponding conict
plots. Red dots denote individuals who are amplifying in that timestep. Green dots denote external unpolarised
opinions.
Discussion
Research has previously shown that extreme polarization can be caused by strong external events impacting populations2. Our model suggests another factor: extreme polarization can be caused by individuals simply amplifying their own opinions. We demonstrate for the first time that this simple idea results in extreme polarization. SOAM shows us that some common trends in recent communication amongst a minority can affect entire populations. Whether spin, sensationalism, clickbait, hype, sentiment polarity, or even "fake news", when opinions are amplified by a few, extreme polarization of many can result. This finding is consistent across network models (the structures connecting individuals). In our experiments, we used a random network (specifically the Erdős–Rényi model), as it is most commonly used in the existing literature to model social networks26. When we use SOAM with other network models that represent plausible network structures for online social networks (the scale-free network31 and the Barabási–Albert network32), we find consistent results: extreme polarization occurred with all of them. Likewise, when we increase the number of connections k (i.e., to model online networks where people have numerous connections), we observe the same result.
Extreme polarization can cause several harmful effects. We explored two methods to address polarization caused by opinion amplification: preventing individuals from amplifying more than five times, and ensuring a consistent communication of opinions with sentiments that fall in a normal range. Both approaches showed that polarization can be curbed. This result is consistent with real-world findings. For example, one study showed that Fox News viewers who watched CNN instead for 30 days became more skeptical of biased coverage33.

We reran the five-strike curbing method on the other types of networks and found that it curbed extreme polarization equally effectively. When we reran the disseminating-unpolarized-opinions method, we found that the Barabási–Albert network32 requires either more frequent dissemination (e.g., daily) or a higher number of messages (e.g., 10 or 20 instead of 5), and sometimes both depending on the run, to be effective: more connected individuals receive more extreme opinions and therefore may need stronger normal-range messaging to counter this.
It is always tempting for us to speak louder to be heard in a crowd. But when some begin to shout, others feel they must also shout. And when everyone attempts to out-shout everyone else, the result can be a screaming mob, all vocalizing at the top of their lungs. In a social network, the volume of sentiment can become amplified in the same way, which can result in groups with extreme polarized opinions. But this form of polarization is participatory and voluntary. If we choose to temper our expressed opinions, if we lower our voices and speak normally instead of screaming, then perhaps we might help provide that much-needed balance of normal sentiment to society, helping curb extremism for all.
Data availability
Data and code are available at http://www.cs.ucl.ac.uk/staff/S.Lim/polarization.
Received: 8 June 2022; Accepted: 20 October 2022
References
1. Tucker, J. A. et al. Social media, political polarization, and political disinformation: A review of the scientific literature (March 19, 2018). SSRN. https://ssrn.com/abstract=3144139 (2018).
2. Condie, S. A. & Condie, C. M. Stochastic events can explain sustained clustering and polarisation of opinions in social networks. Sci. Rep. 11, 1355 (2021).
3. Cinelli, M., Morales, G. D. F., Galeazzi, A., Quattrociocchi, W. & Starnini, M. The echo chamber effect on social media. Proc. Natl. Acad. Sci. U.S.A. 118, e2023301118 (2021).
4. Peck, A. A problem of amplification: Folklore and fake news in the age of social media. J. Am. Folk. 133, 329–351 (2020).
5. Patro, J. et al. Characterizing the spread of exaggerated news content over social media. arXiv:1811.07853 (2018).
6. Yavchitz, A. et al. Misrepresentation of randomized controlled trials in press releases and news coverage: A cohort study. PLoS Med. 9, e1001308 (2012).
7. Molek-Kozakowska, K. Towards a pragma-linguistic framework for the study of sensationalism in news headlines. Discourse Comm. 7, 173–197 (2013).
8. Chakraborty, A., Paranjape, B., Kakarla, S. & Ganguly, N. Stop clickbait: Detecting and preventing clickbaits in online news media. In Proceedings of the International Conference on Advances in Social Networks Analysis and Mining 9–16 (2016).
9. Deng, X. & Chen, R. Sentiment analysis based online restaurants fake reviews hype detection. Web Technologies and Applications: APWeb 2014. Lecture Notes in Computer Science 8710 (2014).
10. Preskey, N. Influencers warned not to use filters to exaggerate effects of beauty products they're promoting. https://www.independent.co.uk/life-style/instagram-beauty-products-spon-advert-rules-b1796894.html (2021).
11. MacIsaac, S., Kelly, J. & Gray, S. 'She has like 4000 followers!': The celebrification of self within school social networks. J. Youth Stud. 21, 816–835 (2018).
12. Adams, R. C. et al. How readers understand causal and correlational expressions used in news headlines. J. Exp. Psychol. Appl. 23, 1–14 (2017).
13. Devitt, A. & Ahmad, K. Sentiment polarity identification in financial news: A cohesion-based approach. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics 984–991 (2007).
14. Zhang, X. & Ghorbani, A. A. An overview of online fake news: Characterization, detection, and discussion. Inf. Process. Mgmt. 57, 102025 (2020).
15. van Prooijen, J. W. & Douglas, K. M. Belief in conspiracy theories: Basic principles of an emerging research domain. Eur. J. Soc. Psychol. 48, 897–908 (2018).
16. French, J. R. Jr. A formal theory of social power. Psychol. Rev. 63, 181 (1956).
17. DeGroot, M. H. Reaching a consensus. J. Am. Stat. Assoc. 69, 118–121 (1974).
18. Lehrer, K. & Wagner, C. Rational Consensus in Science and Society: A Philosophical and Mathematical Study 165 (Springer, 2012).
19. Deffuant, G., Neau, D., Amblard, F. & Weisbuch, G. Mixing beliefs among interacting agents. Adv. Complex Syst. 3, 11 (2001).
20. Hegselmann, R. & Krause, U. Opinion dynamics and bounded confidence models, analysis, and simulation. J. Artif. Soc. Soc. Simul. 5, 33 (2002).
21. Fu, G., Zhang, W. & Li, Z. Opinion dynamics of modified Hegselmann–Krause model in a group-based population with heterogeneous bounded confidence. Phys. A Stat. Mech. Appl. 419, 558–565 (2015).
22. Cheng, C. & Yu, C. Opinion dynamics with bounded confidence and group pressure. Phys. A Stat. Mech. Appl. 532, 121900 (2019).
23. Macy, M. W., Ma, M., Tabin, D. R., Gao, J. & Szymanski, B. K. Polarization and tipping points. Proc. Natl. Acad. Sci. U.S.A. 118, e2102144118 (2021).
24. McDevitt, M., Kiousis, S. & Wahl-Jorgensen, K. Spiral of moderation: Opinion expression in computer-mediated discussion. Int. J. Public Opin. Res. 15, 454–470 (2003).
25. Jarema, M. & Sznajd-Weron, K. Private and public opinions in a model based on the total dissonance function: A simulation study. Computational Science—ICCS 2022. Lecture Notes in Computer Science 146–153 (2022).
26. Amblard, F., Bouadjio-Boulic, A., Gutiérrez, C. S. & Gaudou, B. Which models are used in social simulation to generate social networks? A review of 17 years of publications in JASSS. In Proceedings of the 2015 Winter Simulation Conference 4021–4032 (2015).
27. Musco, C., Musco, C. & Tsourakakis, C. E. Minimizing polarization and disagreement in social networks. In Proceedings of the World Wide Web Conference 369–378 (2018).
28. Garimella, K., De Francisci Morales, G., Gionis, A. & Mathioudakis, M. Reducing controversy by connecting opposing views. In Proceedings of the 10th Annual ACM International Conference on Web Search and Data Mining 81–90 (2017).
29. Rossi, W. S., Polderman, J. W. & Frasca, P. The closed loop between opinion formation and personalised recommendations. IEEE Trans. Control Netw. Syst. 9, 1092–1103 (2021).
30. Matakos, A., Tu, S. & Gionis, A. Tell me something my friends do not know: Diversity maximization in social networks. Knowl. Inf. Syst. 62, 3697–3726 (2020).
31. Bollobás, B., Borgs, C., Chayes, J. T. & Riordan, O. Directed scale-free graphs. In Proceedings of the 14th Annual ACM-SIAM Symposium on Discrete Algorithms 132–139 (2003).
32. Barabási, A.-L. & Albert, R. Emergence of scaling in random networks. Science 286, 509–512 (1999).
33. Broockman, D. & Kalla, J. The manifold effects of partisan media on viewers' beliefs and attitudes: A field experiment with Fox News viewers. OSF Preprints (2022).
Author contributions
Both authors designed the study, conceptualized the model, analyzed the results and wrote the manuscript. S.L.L. also coded and ran the model.
Competing interests
The authors declare no competing interests.
Additional information
Correspondence and requests for materials should be addressed to S.L.L.
Reprints and permissions information is available at www.nature.com/reprints.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access is article is licensed under a Creative Commons Attribution 4.0 International
License, which permits use, sharing, adaptation, distribution and reproduction in any medium or
format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the
Creative Commons licence, and indicate if changes were made. e images or other third party material in this
article are included in the articles Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
© The Author(s) 2022, corrected publication 2022