Scientific Reports | (2022) 12:18131 | https://doi.org/10.1038/s41598-022-22856-z
www.nature.com/scientificreports
Opinion amplification causes extreme polarization in social networks

Soo Ling Lim1* & Peter J. Bentley1,2
Extreme polarization of opinions fuels many of the problems facing our societies today, from issues on human rights to the environment. Social media provides the vehicle for these opinions and enables the spread of ideas faster than ever before. Previous computational models have suggested that significant external events can induce extreme polarization. We introduce the Social Opinion Amplification Model (SOAM) to investigate an alternative hypothesis: that opinion amplification can result in extreme polarization. SOAM models effects such as sensationalism, hype, or "fake news" as people express amplified versions of their actual opinions, motivated by the desire to gain a greater following. We show for the first time that this simple idea results in extreme polarization, especially when the degree of amplification is small. We further show that such extreme polarization can be prevented by two methods: preventing individuals from amplifying more than five times, or through consistent dissemination of balanced opinions to the population. It is natural to try to have the loudest voice in a crowd when we seek attention; this work suggests that instead of shouting to be heard and generating an uproar, it is better for all if we speak with moderation.
Polarization of opinions on social networks is increasingly evident today, with highly contrasting beliefs being shared on politics, environmental and social issues. The likely repercussions of polarization in our societies are well established, from damaging the democratic process to a decrease in tolerance for others1. With so much at stake, the use of computational models to understand the causes of polarization is receiving more attention.

When using models to study social influences, it is common for opinions to converge towards a consensus, or fragment into two or more clusters2. In both cases, final opinions fall within the initial range of opinions. Yet social media breeds not just polarized opinions, but extreme opinions that might otherwise be considered outliers in population norms. Thus in our models, a key outcome to study is when individuals influence each other such that opinions diverge towards extremes that are outside their initial range of opinions.

Synchronized external events have been shown to be a possible cause of polarization2. However, it is possible that polarization happens gradually in the network without needing external intervention. On social media, it is common for people to amplify what they actually feel about something in order to attract attention, because the more extreme a post, the more popular it is1,3. We use the term opinion amplification to encompass the range of behaviors by users that may distort the original opinion with a more positive or negative sentiment. Such behaviors include making unfounded assumptions, making generalizations or summaries, selectively quoting, editorializing, or misunderstanding3.

Opinion amplification may happen at a low level at all times, but it proliferates across a network once a topic is trending4. Here we look at opinion amplification as a potential cause of polarization, focusing specifically on extreme polarization. This differs fundamentally from previous work, as previous models investigated polarization that results from interactions between individuals and, more recently, from events external to the individuals themselves; in this work we argue that noise fails to model this crucial phenomenon. We hypothesize that opinions can become biased towards a more positive or negative sentiment through opinion amplification. We propose the Social Opinion Amplification Model (SOAM) to investigate these ideas. We remove well-studied variables such as noise from the model in order to identify the minimum features needed to create extreme polarization in a population through opinion amplification.
1Department of Computer Science, University College London, London, UK. 2Autodesk Research, London,
UK. *email: s.lim@cs.ucl.ac.uk
Background
The inexorable draw of expressing extreme sentiments may first have emerged in conventional media. It is well known that health research claims become exaggerated in press releases and news articles, with more than 50% of the press releases from certain universities exaggerated, and some news agencies exaggerating about 60% of the articles they publish5. This is "spin", defined as specific reporting strategies, intentional or unintentional, that emphasize the beneficial effect of the experimental treatment6. "Sensationalism" is a close bedfellow when reporting general topics: a discourse strategy of "packaging" information in news headlines in such a way that news items are presented as more interesting, extraordinary and relevant7. These practices became ever more prevalent as the competition for online customers increased, becoming refined into new genres such as "clickbait", amplified headlines designed to lure readers into clicking on a link8, and hype in online reviews, where the hyped review is always absolutely positive or negative9.

As conventional media has transitioned to social media, today everyone is a "media outlet", so the lure of attention-seeking behavior is now felt by individuals. Influencers have used beauty filters to make the products they advertise appear more effective, resulting in warnings from the Advertising Standards Agency (ASA)10. Young people aged 11-18 were observed to exaggerate their behaviors as they aimed to live up to amplified claims about popularity11. Even the accidental use of certain words can make readers believe causal relationships that may not exist12. This is also known as sentiment polarity, an important feature of fake news: in order to make their news persuasive, authors often express strong positive or negative feelings in the content13,14. The result is that bizarre conspiracy theories that might once have been the domain of a tiny minority are now routinely given the same credence as evidence-backed science by large portions of the population15.
It is infeasible to perform experiments on real human populations in order to understand causation of extreme polarization. Computational models provide an essential tool to overcome this empirical limitation. Computational models have been used for decades to study opinion dynamics, with early works often focused on consensus formation16-18. Deffuant et al. developed a model of opinion dynamics in which convergence of opinions into one average opinion and convergence into multiple opinion clusters are observed19. Their model consists of a population of N agents i with continuous opinions x_i. At each timestep, two randomly chosen agents "meet" and re-adjust their opinions when their difference of opinion is smaller in magnitude than a threshold ε. Suppose that the two agents have opinions x and x′ and that |x − x′| < ε; opinions are then adjusted according to:

x = x + µ · (x′ − x) and x′ = x′ + µ · (x − x′)    (1)

where µ is the convergence parameter, taken between 0 and 0.5 during the simulations. Deffuant et al. found that the value of ε is the main influence on the dynamics of the model: when it is high, convergence into one opinion occurs, and when it is low, polarization/fragmentation occurs (convergence into multiple opinions)19. µ and N only influence convergence time and the distribution of final opinions. They applied their model to a social network of agents, whereby any agent in the model can only interact with 4 connected neighbors on a grid (so that the random selection of agents to interact can only come from connected neighbors) and found the same results.
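The pairwise update in Eq. (1) can be sketched in a few lines of Python (our own illustrative code, not the authors'; function and variable names are assumptions):

```python
import random

def deffuant_step(opinions, eps, mu):
    """One Deffuant et al. update: two randomly chosen agents meet and,
    if their opinions differ by less than eps in magnitude, each moves a
    fraction mu toward the other (Eq. (1))."""
    i, j = random.sample(range(len(opinions)), 2)
    x, xp = opinions[i], opinions[j]
    if abs(x - xp) < eps:
        opinions[i] = x + mu * (xp - x)
        opinions[j] = xp + mu * (x - xp)
    return opinions
```

Because the two moves are symmetric, each interaction preserves the pair's mean opinion, which is one way to see why final opinions in this model always stay within the initial range.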
Hegselmann and Krause developed a model with bounded confidence to investigate opinion fragmentation, in which consensus and polarization are special cases20. The Hegselmann-Krause (HK) model is defined as:

x_i(t + 1) = |I(i, x(t))|⁻¹ Σ_{j ∈ I(i, x(t))} x_j(t), for t ∈ T    (2)

where I(i, x) = {1 ≤ j ≤ n : |x_i − x_j| ≤ ε_i} and ε_i ≥ 0 is the confidence level of agent i. Agent i takes only those agents j into account whose opinions differ from his own by not more than ε_i. The base case assumes a uniform level of confidence, i.e., ε_i = ε for all agents i. The authors found that higher values of the confidence threshold ε lead to consensus, while lower values lead to polarization and fragmentation. In all their runs, regardless of whether consensus or polarization occurs, the range of opinions decreases as the simulation runs.
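A minimal synchronous implementation of the HK update in Eq. (2) (our own sketch, assuming the uniform-confidence base case ε_i = ε):

```python
def hk_step(opinions, eps):
    """One Hegselmann-Krause timestep (Eq. (2)): each agent replaces its
    opinion with the average of all opinions within confidence eps of
    its own (its own opinion is always included in the average)."""
    new_opinions = []
    for x in opinions:
        near = [y for y in opinions if abs(x - y) <= eps]
        new_opinions.append(sum(near) / len(near))
    return new_opinions
```

With a small eps, the opinions [0.0, 0.1, 1.0] collapse into two separate clusters, illustrating fragmentation; a large eps instead yields a single consensus.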
Fu et al. modified the HK model by dividing the population into open-minded, moderate-minded and closed-minded agents21. They found that the number of final opinion clusters is dominated by the closed-minded agents; open-minded agents cannot contribute to forming opinion consensus, and the existence of open-minded agents may diversify the final opinions.

Cheng and Yu suggested that in many social situations, an individual's opinion formation and expression may differ because the individual feels pressured to express an opinion similar to the public opinion of the group22. They propose a bounded confidence plus group pressure model, in which each individual forms an inner opinion relative to the bound of confidence and expresses an opinion taking group pressure into consideration. A group with all individuals facing group pressure always reaches a consensus. In a mixed group with both pressured and non-pressured individuals, the consensus threshold ε is significantly reduced, and group pressure does not always help to promote consensus; although, similar to other models, polarization does not occur in their work.
Most recently, Condie and Condie classified social influence into assimilative and differentiative2. Assimilative influence occurs when opinions converge towards a consensus, or fragment into two or more converging clusters, all within the initial range of opinions. Differentiative influence, the focus of our work, occurs when individuals with very dissimilar opinions can influence each other, causing divergence towards extreme opinions (see Fig. 1). Condie and Condie proposed the Social Influence and Event Model (SIEM)2, which builds on the HK bounded confidence model, with ε as confidence threshold, with the following main differences: (1) agents form a social network; (2) an individual i will only change their opinion if their certainty, C_{i,t} ∈ [0, 1], is less than the average certainty of the other individuals with which they interact at time t; (3) most importantly, events can influence many individuals synchronistically over a limited period of time. Events can have a large impact on the distribution of opinions because their influence acts synchronistically across a large proportion of the population, whereas
an individual can only interact with small numbers of other individuals at any particular time. The simulation results showed that SIEM without events exhibited the range of behaviors generated by other influence models under differing levels of confidence threshold ε, leading to consensus (or assimilative influence in their definition). With the presence of strong events, when the confidence threshold ε is high (low homophily), opinions swing between extremes, and when the confidence threshold ε is low (high homophily), opinions diverge into extremes. Condie and Condie2 also introduced a measure of conflict in the population, ΔO_t = SD(O_{i,t}), which they defined as the standard deviation of individual opinions O_{i,t} across the population at timestep t.
Building further on these ideas, Macy et al. used a general model of opinion dynamics to demonstrate the existence of tipping points, at which even an external threat, such as a global pandemic or economic collapse, may be insufficient to reverse the self-reinforcing dynamics of partisan polarization23. Agents in the model have initially random locations in a multidimensional issue space consisting of membership in one of two equal-sized parties and positions on 10 issues. Agents then update their issue positions by moving closer to nearby neighbors and farther from those with whom they disagree, depending on the agents' tolerance of disagreement and strength of party identification compared to their ideological commitment to the issues. They manipulated agents' tolerance for disagreement and strength of party identification, and introduced exogenous shocks corresponding to events (following Condie and Condie2) that create a shared interest against a common threat (e.g., a global pandemic).

These works all demonstrate the value of this form of modelling to explore opinion dynamics, while assuming that the expressed opinion is the same as the actual opinion.
Methods
As explored in the previous section, people may express more extreme opinions on social media compared to their own internal beliefs, and we hypothesize that this may cause influence across the population towards a more positive or negative sentiment. We incorporate this notion of an agent presenting an amplified version of their own opinion in our model, which is built on the Hegselmann-Krause (HK) bounded confidence model of opinion formation.

The Social Opinion Amplification Model (SOAM) consists of a network of individuals, where individuals can influence other individuals that they are connected to on the social network in relation to a specific issue. Opinions are continuous and individuals influence each other in each timestep.

The key innovation in our model is the concept of an expressed opinion, which, for individuals who have a tendency to amplify, is stronger than the individual's actual opinion. This is backed up by early theories that online opinion expression does not necessarily reflect an individual's actual opinion24, and by recent literature showing that people express stronger opinions on social media compared to their actual beliefs3, or hold different public and private opinions25.

We use a random directed network, the most common network structure used to build synthetic social networks26. The use of directed networks enables the representation of asymmetric relationships: individual A may affect individual B but the reverse may not be true (in online social networks this might correspond to B following A without reciprocation, resulting in A influencing B, but B not influencing A). The network comprises k average links per node; the entire network is considered, including any subnetworks, following Condie and Condie2.
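A random directed network with an average of k links per node can be generated as follows (an illustrative sketch; the paper does not specify its generator beyond "random directed network", so we assume independent Erdős-Rényi-style edges):

```python
import random

def make_directed_network(n, k, seed=None):
    """Random directed graph on n nodes: each possible edge i -> j
    (i != j) is created independently with probability k / (n - 1),
    giving an average of k outgoing links per node."""
    rng = random.Random(seed)
    p = k / (n - 1)
    return {i: [j for j in range(n) if j != i and rng.random() < p]
            for i in range(n)}
```

Here the adjacency list network[i] holds the individuals connected to i; whether an edge means "i influences j" or the reverse is a modelling choice about follower semantics.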
An individual i in the network at timestep t = 0 has an initial opinion O_{i,t=0}. The opinion of an individual i at timestep t > 0 is defined as:

O_{i,t} = ( O_{i,t−1} + Σ_{j ∈ I_{i,t}} SO_{j,t} ) / ( 1 + |I_{i,t}| )    (3)

where O_{i,t−1} is individual i's opinion in the previous timestep and I_{i,t} is the set of individuals connected to individual i whose expressed opinion is within the confidence threshold ε, as per Hegselmann and Krause20:

I_{i,t} = { j : |O_{i,t−1} − SO_{j,t−1}| ≤ ε }    (4)

An individual i's expressed opinion SO_{i,t} is calculated as follows:

SO_{i,t} = O_{i,t−1} − σ_{i,t}, if O_{i,t−1} < 0 and E_{i,t} = True;
SO_{i,t} = O_{i,t−1} + σ_{i,t}, if O_{i,t−1} ≥ 0 and E_{i,t} = True;
SO_{i,t} = O_{i,t−1}, otherwise.    (5)
Figure 1. Assimilative influence (left), bounded assimilative influence (middle) and differentiative influence (right)2. Panels plot opinions against time, showing convergence, fragmentation and divergence respectively.
where E_{i,t} indicates whether individual i amplifies its opinion at timestep t, and σ_{i,t} is the individual's amplification amount at timestep t.
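Putting Eqs. (3)-(5) together, one SOAM timestep can be sketched as follows (our own illustrative code, not the authors' implementation; parameter names follow Tables 1 and 2):

```python
import random

def expressed_opinion(o, is_amplifier, p, s, rng):
    """Eq. (5): an amplifier, with probability p, pushes its reported
    opinion away from zero by a random amount sigma in [0, s];
    everyone else reports their actual opinion unchanged."""
    if is_amplifier and rng.random() <= p:
        sigma = rng.uniform(0.0, s)
        return o - sigma if o < 0 else o + sigma
    return o

def soam_step(opinions, network, amplifiers, eps, p, s, rng):
    """Eqs. (3)-(4): each individual averages its own opinion with the
    expressed opinions of connected individuals that lie within the
    confidence threshold eps of its own opinion."""
    expressed = [expressed_opinion(o, a, p, s, rng)
                 for o, a in zip(opinions, amplifiers)]
    new_opinions = []
    for i, o in enumerate(opinions):
        near = [expressed[j] for j in network[i]
                if abs(o - expressed[j]) <= eps]
        new_opinions.append((o + sum(near)) / (1 + len(near)))
    return new_opinions
```

Note that the update uses expressed opinions to influence others, while each individual's own contribution is its actual opinion, which is what allows actual opinions to drift beyond the initial range.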
Table 1 provides the definitions for SOAM variables at the individual level and Table 2 provides the definitions for SOAM variables at the system level. Finally, Table 3 compares the main features of SOAM with the features of similar models in the literature, to illustrate the similarities and differences and justify our design decisions.
Results
Given the hypothesis that amplified individual opinions can cause polarization, we study the effect of amplification on opinion dynamics under different confidence thresholds. We compare the model baseline without amplification with the model using amplification. In more detail:

1. No amplification, for confidence thresholds ε = 0.2 and 0.8. These confidence threshold settings follow the range of settings explored in Condie and Condie2. The confidence threshold determines the range in
Table 1. SOAM individual-level variables and values.

| Variable | Definition | Value range |
| O_{i,t=0} | Initialized opinion of individual i at timestep 0, a random number within the value range. A value of −1.0 means a very negative opinion on the topic, and a value of 1.0 means a very positive opinion on the topic. Note that for t > 0, there is potential for opinions to go beyond the original opinion range | [−1.0, 1.0] |
| O_{i,t>0} | Opinion of individual i at timestep t > 0 | [−∞, ∞] |
| E_i | Whether or not the individual i is an amplifier | True/False |
| E_{i,t} | Whether or not the individual i amplifies their opinion in timestep t, which is True if E_i = True and P_{i,t} ≤ p, where P_{i,t} is a generated probability for that individual i at that timestep t | True/False |
| σ_{i,t} | Amplified amount for individual i at timestep t, a random number between 0 and s, where s is the strength of amplification (see Table 2) | [0, s] |
Table 2. SOAM system-level variables and values.

| Variable | Definition | Value range |
| t | Timesteps | [0, ∞] |
| n | Number of nodes | [0, ∞] |
| k | Average links per node | [0, n] |
| ε | Confidence threshold | [0, 1] |
| π | Proportion of the population who are amplifiers | [0, 1] |
| p | Probability of amplifiers amplifying opinions | [0, 1] |
| s | Strength of amplification | [0, 1] |
Table 3. SOAM features compared to the literature. We only compare with models that SOAM is based on.

| Model | Social network | How opinion is expressed | Opinion update | Influence |
| HK bounded confidence model20 | Fully connected | As is: what the individual thinks and what the individual expresses are the same | Influence occurs only if opinions are not too far from each other (within confidence threshold ε) | Each individual influences all the other individuals |
| Social Influence and Event Model2 | Random network with average links per individual of k = 5; a new network is formed every timestep | As is: what the individual thinks and what the individual expresses are the same | An individual will only change their opinion if their certainty is less than the average certainty of the other individuals with which they interact at timestep t; an individual is influenced by an event if the event strength is not too far from the individual's opinion (within ε) and the individual's confidence is less than or equal to the event strength | Influence occurs between linked individuals |
| SOAM | Random network with k = 2, 5 or 10 average links per individual | Some individuals provide expressed opinions that are amplified versions of their actual opinions | Influence occurs only if opinions are not too far from each other (within ε) | Influence occurs between linked individuals |
which an individual will re-adjust their opinion, i.e., an individual will re-adjust their opinion based on other individuals' opinions if the difference of opinion is smaller in magnitude than the confidence threshold, so a low confidence threshold means that individuals are less likely to re-adjust their opinions.

2. Amplification, where the proportion of the population who are amplifiers π = 0.2 (low proportion of amplifiers) or 0.5 (high proportion of amplifiers), amplification probability p = 0.5, amplification strength s = 0.5, for confidence thresholds ε = 0.2 and 0.8.
We ran the model for t = 400 timesteps, with n = 100 nodes and k = 5 average links per node, and plotted each individual's opinion over time. Results were invariant to the number of nodes for tests up to n = 1000. For clarity our plot scale ranges from −2.0 to 2.0 (doubling the initial opinion range), in order to show extreme polarization if it exists. We plot the actual opinions and not the expressed opinions in this work, as they indicate the degree to which population opinions are truly modified: while we may all exaggerate at times, our actions are determined by our true beliefs. Our results show that when there are no amplifiers, we see the usual convergence and fragmented convergence, and the opinion range always stays within the initial range. In other words, assimilative influence (Fig. 1, left) and bounded assimilative influence (Fig. 1, middle) occurred in Fig. 2a,b respectively. When there is amplification (π = 0.2 and 0.5), we observe that the extreme polarization illustrated in Fig. 1 (right) occurs, see Fig. 2c-f. When ε = 0.8, extreme polarization tends to occur in a single direction (e.g., Fig. 2c shows convergence to extreme negative sentiment and Fig. 2e shows convergence to extreme positive sentiment), while ε = 0.2 results in multiple convergences of clusters, some extreme and some not, see Fig. 2e,f. When ε = 0.2 and the proportion of amplifiers is 50% (π = 0.5), extreme polarization occurs in both positive and negative sentiments (Fig. 2f). Note that the polarization more than doubles the initial opinion sentiment values, showing extreme sentiments beyond 2.0 or below −2.0 (our Fig. 2 opinion plot scale is from −2.0 to 2.0).
SOAM illustrates that when individuals amplify their opinions, extreme polarization occurs, and when the proportion of amplifiers is higher, extreme polarization occurs more quickly and at a bigger magnitude. Although our work here focuses on directed networks, the same findings are evident for undirected networks. To understand in more detail how variations in the parameters relating to amplification affect the development of the distribution of population opinion across time, we ran SOAM under a range of different variables and settings:
(a) Baseline: confidence threshold, no amplification. How do different confidence thresholds ε affect conflict when there is no amplification? k = 5, for confidence thresholds ε = 0.2, 0.5 and 0.8.
(b) Links per node, no amplification. How does the average number of links per node k affect conflict when there is no amplification? k = 2, 5 and 10, for confidence threshold ε = 0.8, comparing with the baseline.
(c) Confidence threshold with amplification. How do different confidence thresholds ε affect conflict when there is a low proportion of amplifiers? π = 0.2, p = 0.5, s = 0.2, k = 5, for confidence thresholds ε = 0.2, 0.5 and 0.8, comparing with the baseline.
(d) Strength of amplification. How does the strength of amplification s affect the results? k = 5, ε = 0.2, π = 0.2, p = 0.5, s = 0.2, 0.5 and 0.8, comparing with the baseline.
(e) Proportion of amplifiers. How does the proportion of amplifiers π affect the results? k = 5, ε = 0.2, p = 0.5, s = 0.2, π = 0.2, 0.5 and 0.8, comparing with the baseline.
(f) Probability of amplification. How does the probability of amplifiers amplifying opinions p affect the results? k = 5, ε = 0.2, π = 0.2, s = 0.2, p = 0.2, 0.5 and 0.8, comparing with the baseline.
Similar to the previous experiment, we ran the model for t = 400 timesteps and n = 100 nodes. We measure and plot population conflict over time, defined by Condie and Condie2 as the standard deviation of population opinion, ΔO_t = SD(O_{i,t}), where O_{i,t} is the individual's opinion at timestep t. Conflict is a useful measure when we wish to understand the diversity of opinions in the population, with higher conflict indicative of a broader range (which may be the result of polarization).
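The conflict measure is simply a standard deviation taken over the population at each timestep. A sketch (we assume the population rather than the sample standard deviation, which matches the quoted baseline of 1/√3 ≈ 0.577 for a uniform distribution on [−1, 1]):

```python
import statistics

def conflict(opinions):
    """Population conflict at one timestep: the standard deviation of
    individual opinions across the whole population."""
    return statistics.pstdev(opinions)
```

For a uniform random distribution on [−1, 1], conflict approaches 1/√3 ≈ 0.577 as the population grows, which is the grey dotted baseline shown in Fig. 3.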
Our results show that conflict levels decline rapidly when the confidence threshold ε is high: in Fig. 3a, when ε = 0.8, conflict reached zero very early in the run, which is also consistent with the results in Fig. 2a. Conversely, the low confidence threshold ε = 0.2 results in the highest conflict among the three thresholds, although without amplification the level of conflict is only slightly above that of the random opinion distribution at t = 0. Reducing the average number of links per node increases conflict: Fig. 3b shows that with a very low average number of links per node of k = 2, conflict remains steady at around 0.3, while for k = 5 and 10, conflict reduces to zero. We can see in Fig. 3c that a lower confidence threshold results in higher conflict, similar to Fig. 3a where there is no amplification. When there is amplification, low and medium levels of confidence threshold, ε = 0.2 and 0.5, result in conflict levels that continue to increase as time progresses, reaching approximately 1.0 for ε = 0.5 and 1.25 for ε = 0.2. In Fig. 3d, s = 0.2 is the only setting where conflict increases over time, while s = 0.5 and 0.8 maintain conflict at the same level as at the start; this is because a high strength of amplification may make the opinion so extreme that others can no longer relate to it (it falls outside the confidence threshold) and so it no longer influences others. Increasing the proportion of amplifiers increases conflict, with π = 0.8 more than doubling the original conflict level by t = 150 (Fig. 3e). Increasing the probability of amplification increases conflict: Fig. 3f shows that at p = 0.5 and 0.8, conflict reaches more than 1.0.
Countering extreme polarization
SOAM suggests that opinion amplification, even with low occurrence, and especially with low amplification strength, can cause extreme polarization in the population. Given that such polarization is also evident in real-world social networks, in this section we examine potential methods to counter, prevent, or even reverse extreme polarization. Recent research on polarization mitigation and opinion control suggests various approaches; for example, Musco
Figure 2. Opinions of individuals starting from a random distribution [−1.0, 1.0] under a range of conditions. Red dots denote individuals who are amplifying in that timestep. Grey areas indicate opinions outside the initial opinion range. Y-axis shown from −2.0 to 2.0 (double the initial opinion range) for clarity; in (c) to (f), opinions exceed this range and become even more extreme.
Figure 3. Conflict (ΔO_t) tracked over 400 timesteps, starting from a uniform random distribution of opinions (ΔO_t = 1/√3 ≈ 0.577), indicated with a grey dotted line in each chart. Shown are dependencies on: (a) confidence threshold when there were no amplifications; (b) average number of connections per individual per timestep when there were no amplifications; (c) confidence threshold when there were amplifications; (d) strength of amplification; (e) proportion of amplifiers; and (f) probability of amplification.
et al. studied ways to minimise polarisation and disagreement in social networks27, Garimella et al. suggest controversy can be reduced by connecting opposing views28, Rossi et al. studied closed loops between opinion formation and personalised recommendations29, while Matakos et al. proposed a recommender-based approach to break filter bubbles in social media30. Here we examine two techniques in use today by social networks to see their effectiveness in our model.

Countering method 1: the five-strike system. A common strategy used by online social networks is to stop users from posting after a number of offenses against their rules. (For example, Twitter's medical misinformation policy (https://help.twitter.com/en/rules-and-policies/medical-misinformation-policy) has a strike system: each violation of the policy counts as a strike, and 5 or more strikes result in permanent suspension of the Twitter account.) We implement this policy by detecting amplified posts; if a user amplifies more than 5 times, they are no longer allowed to post. To study the effect, we add a maximum amplify parameter, so that each agent can only amplify 5 times. Once they exceed this number, they are removed. In order to keep the population constant, we replace a removed individual with a new individual with the same default probabilities and a random opinion.
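A sketch of the five-strike intervention (illustrative only; function and parameter names are our own):

```python
import random

def enforce_strike_limit(opinions, amplify_counts, rng, max_strikes=5):
    """Any individual who has amplified more than max_strikes times is
    replaced by a fresh individual with a random opinion in [-1, 1],
    keeping the population size constant."""
    for i, count in enumerate(amplify_counts):
        if count > max_strikes:
            opinions[i] = rng.uniform(-1.0, 1.0)
            amplify_counts[i] = 0
    return opinions, amplify_counts
```

In the full model the replacement individual keeps the same default amplification probabilities, so a replaced amplifier may begin amplifying again; the cap nonetheless limits how far any one individual can push the population.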
We ran the model with the same amplification settings as in the previous section: t = 400 timesteps, n = 100 nodes, and k = 5 average links per node, with the proportion of the population who are amplifiers π = 0.5, amplification probability p = 0.5, amplification strength s = 0.5, for confidence thresholds ε = 0.2 and 0.8. We compare a normal run of SOAM with a run that uses the maximum amplify intervention, both starting from the same random seed to ensure that the compositions of the initial populations are identical.

While extreme polarization occurred in Fig. 4a, after applying the curbing method, Fig. 4b shows that extreme polarization no longer occurs. The same results can be seen for Fig. 4c versus Fig. 4d.
Figure 4. Opinions of individuals starting from a random distribution [−1.0, 1.0] with corresponding conflict plots. Red dots denote individuals who are amplifying in that timestep. Grey areas indicate opinions outside the initial opinion range.
Countering method 2: disseminating balanced opinions. A second approach to countering extreme
opinions on social networks is for institutions to disseminate balanced opinions about any given topic. We model
this by introducing an additional 5 random external opinions at every other timestep, which are randomly
spread across the initial range [−1.0, 1.0], representing a normal range of non-polarized opinions. Every indi-
vidual has access to this same set of opinions, every other timestep, and the same condence threshold aspect
applies, where people are only inuenced by opinions that are within the condence threshold. is simulates
institutions exercising correctional behaviors of countering misinformation by publicly disseminating less polar-
ized opinions across the original range of “normal” opinions. For example, the Science Media Centre (https://
www. scien cemed iacen tre. org/ about- us/) explicitly aims to disseminate “accurate and evidence-based informa-
tion about science and engineering through the media, particularly on controversial and headline news stories
when most confusion and misinformation occurs”. We ran the model with the same settings as before, timesteps
t
= 400, number of nodes
n
= 100, and average links per node
k
= 5, with the proportion of the population who
are ampliers
π
= 0.5, amplication probability
p
= 0.5, amplication strength
s
= 0.5, for condence thresholds
ε
= 0.2 and 0.8. Again, we compare a normal run of SOAM with a run that uses the intervention, both starting
from the same random seed to ensure the composition of the initial populations are identical.
While extreme polarization occurred in Fig. 5a, Fig. 5b shows that extreme polarization no longer happens, although some opinions drift slightly outside the original range. The same result can be seen for Fig. 5c versus Fig. 5d. This shows that with a small but consistent intervention, extreme polarization can be curbed.
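The intervention above can be made concrete in a few lines of code. The sketch below is illustrative only: it assumes a Deffuant-style bounded-confidence update (reference 19), and the function names, the averaging over in-threshold external messages, and the convergence rate mu are our assumptions rather than the paper's exact update rule.

```python
import random

def bounded_confidence_step(opinions, neighbors, eps=0.2, mu=0.5):
    """One Deffuant-style update: each individual moves toward a random
    neighbor's opinion, but only if it lies within the confidence
    threshold eps (assumed update rule, for illustration)."""
    new = list(opinions)
    for i in range(len(opinions)):
        j = random.choice(neighbors[i])
        if abs(opinions[j] - opinions[i]) <= eps:
            new[i] = opinions[i] + mu * (opinions[j] - opinions[i])
    return new

def inject_balanced(opinions, eps=0.2, mu=0.5, n_msgs=5):
    """Countering method 2: every other timestep, all individuals see
    n_msgs external opinions drawn uniformly from the original range
    [-1, 1]; only messages within eps of an individual's opinion
    influence them (here, by pulling toward their mean)."""
    msgs = [random.uniform(-1.0, 1.0) for _ in range(n_msgs)]
    new = list(opinions)
    for i, op in enumerate(opinions):
        close = [m for m in msgs if abs(m - op) <= eps]
        if close:
            target = sum(close) / len(close)
            new[i] = op + mu * (target - op)
    return new
```

Because each individual moves toward a convex combination of values inside [−1.0, 1.0], the injected messages can only pull opinions back toward the original range, which is the mechanism behind the curbing effect shown in Fig. 5.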
Figure 5. Opinions of individuals starting from a random distribution [−1.0, 1.0] with corresponding conflict plots. Red dots denote individuals who are amplifying in that timestep. Green dots denote external unpolarised opinions.
Discussion
Research has previously shown that extreme polarization can be caused by strong external events impacting populations2. Our model suggests another factor: that extreme polarization can be caused by individuals simply amplifying their own opinions. We demonstrate for the first time that this simple idea results in extreme polarization. SOAM shows us that some common trends in recent communication amongst a minority can affect entire populations. Whether spin, sensationalism, clickbait, hype, sentiment polarity, or even "fake news", when opinions are amplified by a few, extreme polarization of many can result. This finding is consistent across network models (the connections between individuals). In our experiments, we used a random network (specifically the Erdős–Rényi model), as it is the most commonly used in the existing literature to model social networks26. When we use SOAM with other network models that represent plausible structures for online social networks, the scale-free network31 and the Barabási–Albert network32, we find consistent results: extreme polarization occurred with all of them. Likewise, when we increase the number of connections k (i.e., to model online networks where people have numerous connections), we observe the same result.
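As a rough sketch of this robustness check, the three network families can be generated with the networkx library (an assumption; the paper does not name its tooling). The seed and the Barabási–Albert attachment parameter m are illustrative choices, picked so the average degree is near the paper's k = 5 for n = 100 nodes.

```python
import networkx as nx

n, k = 100, 5  # nodes and target average links per node (the paper's settings)

# Erdős–Rényi random graph: edge probability chosen so expected degree ≈ k
er = nx.erdos_renyi_graph(n, p=k / (n - 1), seed=42)

# Barabási–Albert preferential attachment: each new node attaches m edges
ba = nx.barabasi_albert_graph(n, m=k // 2, seed=42)

# Directed scale-free graph (Bollobás et al., reference 31), made undirected
sf = nx.scale_free_graph(n, seed=42).to_undirected()

for g, name in [(er, "Erdős–Rényi"), (ba, "Barabási–Albert"), (sf, "scale-free")]:
    avg_deg = sum(d for _, d in g.degree()) / g.number_of_nodes()
    print(name, round(avg_deg, 2))
```

Running SOAM over each graph's adjacency lists, rather than the Erdős–Rényi network only, is what the robustness comparison amounts to.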
Extreme polarization can cause several harmful effects. We explored two methods to address polarization caused by opinion amplification: preventing individuals from amplifying more than five times, and ensuring a consistent communication of opinions with sentiments that fall in a normal range. Both approaches showed that polarization can be curbed. This result is consistent with real-world findings. For example, one study showed that Fox News viewers who watched CNN instead for 30 days became more skeptical of biased coverage33.
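The five-strike cap can be sketched as follows. This is a hypothetical reading of the mechanism: the exact amplification rule (here, scaling the opinion away from zero by a factor of 1 + s) and the function name are our assumptions, while p (amplification probability) and s (amplification strength) are SOAM's parameters from the paper.

```python
import random

CAP = 5  # the "five-strike" limit on lifetime amplifications

def express_opinion(opinion, is_amplifier, amp_count, p=0.5, s=0.5, cap=CAP):
    """Return (expressed_opinion, updated_amp_count).

    An amplifier exaggerates its true opinion away from zero by
    strength s with probability p, but only until it has already
    amplified cap times; after that it expresses its true opinion.
    The scaling rule is an illustrative assumption, not the paper's
    published equation."""
    if is_amplifier and amp_count < cap and random.random() < p:
        return opinion * (1.0 + s), amp_count + 1
    return opinion, amp_count
```

Tracking `amp_count` per individual and checking it before each expression is all the intervention requires, which is why it is cheap to enforce on any of the network types above.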
We reran the five-strike curbing method on the other types of networks and found that it curbed extreme polarization equally effectively. When we reran the method of disseminating unpolarized opinions, we found that the Barabási–Albert network32 requires more frequent dissemination (e.g., daily), a higher number of messages (e.g., 10 or 20 instead of 5), or sometimes both, depending on the run, to be effective: more connected individuals receive more extreme opinions and therefore may need stronger normal-range messaging to counter them.
It is always tempting for us to speak louder to be heard in a crowd. But when some begin to shout, others feel they must also shout. And when everyone attempts to out-shout everyone else, the result can be a screaming mob, all vocalizing at the top of their lungs. In a social network, the volume of sentiment can become amplified in the same way, which can result in groups with extremely polarized opinions. But this form of polarization is participatory and voluntary. If we choose to temper our expressed opinions, if we lower our voices and speak normally instead of screaming, then perhaps we might help provide that much-needed balance of normal sentiment to society, helping curb extremism for all.
Data availability
Data and code are available at http://www.cs.ucl.ac.uk/staff/S.Lim/polarization.
Received: 8 June 2022; Accepted: 20 October 2022
References
1. Tucker, J. A. et al. Social media, political polarization, and political disinformation: A review of the scientific literature (March 19, 2018). SSRN. https://ssrn.com/abstract=3144139 (2018).
2. Condie, S. A. & Condie, C. M. Stochastic events can explain sustained clustering and polarisation of opinions in social networks. Sci. Rep. 11, 1355 (2021).
3. Cinelli, M., Morales, G. D. F., Galeazzi, A., Quattrociocchi, W. & Starnini, M. The echo chamber effect on social media. Proc. Natl. Acad. Sci. U.S.A. 118, e2023301118 (2021).
4. Peck, A. A problem of amplification: Folklore and fake news in the age of social media. J. Am. Folk. 133, 329–351 (2020).
5. Patro, J. et al. Characterizing the spread of exaggerated news content over social media. arXiv:1811.07853 (2018).
6. Yavchitz, A. et al. Misrepresentation of randomized controlled trials in press releases and news coverage: A cohort study. PLoS Med. 9, e1001308 (2012).
7. Molek-Kozakowska, K. Towards a pragma-linguistic framework for the study of sensationalism in news headlines. Discourse Comm. 7, 173–197 (2013).
8. Chakraborty, A., Paranjape, B., Kakarla, S. & Ganguly, N. Stop clickbait: Detecting and preventing clickbaits in online news media. In Proceedings of the International Conference on Advances in Social Networks Analysis and Mining 9–16 (2016).
9. Deng, X. & Chen, R. Sentiment analysis based online restaurants fake reviews hype detection. Web Technologies and Applications: APWeb 2014. Lecture Notes in Computer Science 8710 (2014).
10. Preskey, N. Influencers warned not to use filters to exaggerate effects of beauty products they're promoting. https://www.independent.co.uk/life-style/instagram-beauty-products-spon-advert-rules-b1796894.html (2021).
11. MacIsaac, S., Kelly, J. & Gray, S. 'She has like 4000 followers!': The celebrification of self within school social networks. J. Youth Stud. 21, 816–835 (2018).
12. Adams, R. C. et al. How readers understand causal and correlational expressions used in news headlines. J. Exp. Psychol. Appl. 23, 1–14 (2017).
13. Devitt, A. & Ahmad, K. Sentiment polarity identification in financial news: A cohesion-based approach. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics 984–991 (2007).
14. Zhang, X. & Ghorbani, A. A. An overview of online fake news: Characterization, detection, and discussion. Inf. Process. Mgmt. 57, 102025 (2020).
15. van Prooijen, J. W. & Douglas, K. M. Belief in conspiracy theories: Basic principles of an emerging research domain. Eur. J. Soc. Psychol. 48, 897–908 (2018).
16. French, J. R. Jr. A formal theory of social power. Psychol. Rev. 63, 181 (1956).
17. DeGroot, M. H. Reaching a consensus. J. Am. Stat. Assoc. 69, 118–121 (1974).
18. Lehrer, K. & Wagner, C. Rational Consensus in Science and Society: A Philosophical and Mathematical Study 165 (Springer, 2012).
19. Deffuant, G., Neau, D., Amblard, F. & Weisbuch, G. Mixing beliefs among interacting agents. Adv. Complex Syst. 3, 11 (2001).
20. Hegselmann, R. & Krause, U. Opinion dynamics and bounded confidence models, analysis, and simulation. J. Artif. Soc. Soc. Simul. 5, 33 (2002).
21. Fu, G., Zhang, W. & Li, Z. Opinion dynamics of modified Hegselmann–Krause model in a group-based population with heterogeneous bounded confidence. Phys. A Stat. Mech. Appl. 419, 558–565 (2015).
22. Cheng, C. & Yu, C. Opinion dynamics with bounded confidence and group pressure. Phys. A Stat. Mech. Appl. 532, 121900 (2019).
23. Macy, M. W., Ma, M., Tabin, D. R., Gao, J. & Szymanski, B. K. Polarization and tipping points. Proc. Natl. Acad. Sci. U.S.A. 118, e2102144118 (2021).
24. McDevitt, M., Kiousis, S. & Wahl-Jorgensen, K. Spiral of moderation: Opinion expression in computer-mediated discussion. Int. J. Public Opin. Res. 15, 454–470 (2003).
25. Jarema, M. & Sznajd-Weron, K. Private and public opinions in a model based on the total dissonance function: A simulation study. Computational Science – ICCS 2022. Lecture Notes in Computer Science 146–153 (2022).
26. Amblard, F., Bouadjio-Boulic, A., Gutiérrez, C. S. & Gaudou, B. Which models are used in social simulation to generate social networks? A review of 17 years of publications in JASSS. In Proceedings of the 2015 Winter Simulation Conference 4021–4032 (2015).
27. Musco, C., Musco, C. & Tsourakakis, C. E. Minimizing polarization and disagreement in social networks. In Proceedings of the World Wide Web Conference 369–378 (2018).
28. Garimella, K., De Francisci Morales, G., Gionis, A. & Mathioudakis, M. Reducing controversy by connecting opposing views. In Proceedings of the 10th Annual ACM International Conference on Web Search and Data Mining 81–90 (2017).
29. Rossi, W. S., Polderman, J. W. & Frasca, P. The closed loop between opinion formation and personalised recommendations. IEEE Trans. Control Netw. Syst. 9, 1092–1103 (2021).
30. Matakos, A., Tu, S. & Gionis, A. Tell me something my friends do not know: Diversity maximization in social networks. Knowl. Inf. Syst. 62, 3697–3726 (2020).
31. Bollobás, B., Borgs, C., Chayes, J. T. & Riordan, O. Directed scale-free graphs. In Proceedings of the 14th Annual ACM-SIAM Symposium on Discrete Algorithms 132–139 (2003).
32. Barabási, A.-L. & Albert, R. Emergence of scaling in random networks. Science 286, 509–512 (1999).
33. Broockman, D. & Kalla, J. The manifold effects of partisan media on viewers' beliefs and attitudes: A field experiment with Fox News viewers. OSF Preprints (2022).
Author contributions
Both authors designed the study, conceptualized the model, analyzed the results, and wrote the manuscript. S.L.L. also coded and ran the model.
Competing interests
The authors declare no competing interests.
Additional information
Correspondence and requests for materials should be addressed to S.L.
Reprints and permissions information is available at www.nature.com/reprints.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
© The Author(s) 2022, corrected publication 2022