Biased processing and opinion polarisation: experimental
refinement of argument communication theory in the context
of the energy debate
Sven Banisch1* and Hawal Shamon2
Abstract
We combine empirical experimental research on biased argument processing with a computational theory of group
deliberation in order to clarify the role of biased processing in debates around energy. The experiment reveals a strong
tendency to consider arguments aligned with the current attitude more persuasive and to downgrade those speaking
against it. This is integrated into the framework of argument communication theory in which agents exchange arguments
about a certain topic and adapt opinions accordingly. We derive a mathematical model that allows us to relate the strength
of biased processing to expected attitude changes given the specific experimental conditions and find a clear signature
of moderate biased processing. We further show that this model fits significantly better to the experimentally observed
attitude changes than the neutral argument processing assumption made in previous models. Our approach provides
new insight into the relationship between biased processing and opinion polarisation. At the individual level our analysis
reveals a sharp qualitative transition from attitude moderation to polarisation. At the collective level we find (i.) that weak
biased processing significantly accelerates group decision processes whereas (ii.) strong biased processing leads to
a persistent conflictual state of subgroup polarisation. While this shows that biased processing alone is sufficient for
polarisation, we also demonstrate that homophily may lead to intra-group conflict at significantly lower rates of biased
processing.
Keywords
biased processing, attitude change, polarisation, experimental calibration, argument persuasion, group deliberation,
opinion dynamics, energy debate
1 Introduction
Social processes can currently be observed around the world
in which controversies over various issues are coming to
a head. For example, while some members of society are
strongly in favour of a political decision-maker, others
are strongly opposed to the same political leader (e.g.
Trump, Erdogan, Putin, Lukashenko). The same processes
can be identified worldwide for other objects of attitude,
such as migration movements, measures to contain the
COVID pandemic or climate change and its cause(s).
These developments are potentially dangerous, as they threaten international as well as intranational social cohesion.
It is therefore all the more important to understand the
mechanisms of such processes in detail.
A variety of theoretical models have been developed
to understand the mechanism behind the emergence
of consensus, polarisation and conflict over opinion.
Theoretical approaches such as social influence network
theory (Friedkin 1999;Friedkin and Johnsen 2011)
and social feedback theory (Banisch and Olbrich 2019;
Gaisbauer et al. 2020) put a primary focus on how
the structure of social networks impacts the dynamical
evolution of attitudes in a group or a population. It is
well known from this research that network segregation
and community structure favor diversity and polarisation.
Other computational studies explain (sub)group polarisation
based on the homophily principle (Lazarsfeld and Merton
1954;McPherson et al. 2001) by which the propensity
of social exchange depends on the similarity of opinions.
These models show that preferences for interaction with
similar others may lead to persistent plurality (Axelrod 1997; Hegselmann et al. 2002; Banisch et al. 2010) and polarisation (Mäs and Flache 2013; Banisch and Olbrich 2021). One contribution of this paper is to show that neither structural faultlines nor homophily are necessary for collective polarisation. The intra-individual tendency of biased processing alone is sufficient.
Our theoretical model is based on argument communication theory (ACT) advanced by Mäs and Flache (2013).
The main idea is that an opinion is a multi-level construct
comprised of an attitude layer and an underlying set of
arguments (cf. Banisch and Olbrich 2021). In repeated inter-
action agents exchange pro and con arguments about an
attitude object and adjust their attitudes accordingly. If this
process of argument exchange is coupled with homophily
at the level of attitudes this gives rise to the formation of
two increasingly antagonistic groups which rely on more
and more separated argument pools (Sunstein 2002). As a
consequence group opinions become more and more con-
centrated at the extremes of the opinion scale. ACT has
1 Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany
2 Institute of Energy and Climate Research - Systems Analysis and Technology Evaluation (IEK-STE), Forschungszentrum Jülich, Germany
*Corresponding author (sven.banisch@universecity.de)
proven very useful for understanding the impact of opinion diversity and demographic faultlines in group deliberation processes (Mäs et al. 2013; Feliciani et al. 2020) and is also capable of explaining how opinions on multiple interrelated topics may align along ideological lines (Banisch and Olbrich 2021). The main contribution of this paper is to
propose and experimentally validate a refined mechanism
of argument exchange that incorporates biased information
processing and to show that the group-level predictions of
ACT are fundamentally affected when these refined micro-
assumptions are incorporated.
Biased argument processing – also labeled as biased assimilation (Lord et al. 1979; Corner et al. 2012; Kobayashi 2016), defensive processing (Wood et al. 1995), refutational processing (Liu et al. 2016) or attitude congruence bias (Taber et al. 2009) in the literature – refers to a person's tendency to inflate the quality of arguments that align with his or her existing attitude on an attitude object, whereas the quality of those arguments that speak against a person's prevailing attitude is downgraded. A number of empirical studies (cf. e.g. Biek et al. 1996; Teel et al. 2006; Corner et al. 2012; Kobayashi 2016; Shamon et al. 2019) across different
topics and samples have shown that biased processing is a
robust cognitive mechanism whenever persons are exposed
to a set of opposing arguments on attitude objects. In
order to integrate this intra-personal tendency of attitude-
dependent argument processing, we rely on an empirical
study in the context of climate change, and energy production
in particular (Shamon et al. 2019). In this experiment,
attitudes towards six different energy technologies (coal
power stations, wind turbines, etc.) are measured before and
after subjects are exposed to a balanced set of 7 pro and 7
con arguments. Subjects are asked to rate the persuasiveness
of arguments and their judgements reveal a systematic bias
towards attitude-coherent arguments. Our cognitive model
assumes that this biased evaluation of arguments affects the
probability with which arguments are taken up by an agent
to a certain degree β. For a scenario which mimics the
experimental design of Shamon et al. (2019) as closely as
possible, we can derive a statistical model of the expected
effects of the argument treatment in which the free model
parameters have a clear meaning in terms of mechanisms.
We show that this cognitive model fits significantly better to
the experimentally observed attitude changes than the neutral
argument processing assumption made in all previous ACT
models.
Such a close alignment of a computational model of
information processing and an experiment on argument
persuasion sheds light on the relation between biased
processing and attitude polarisation in a variety of ways.
First, our theoretical model provides a clear understanding of
the relation between biased processing and attitude change at
the individual level. Empirical studies (e.g. Lord et al. 1979;
Taber and Lodge 2006;Taber et al. 2009;Druckman and
Bolsen 2011;Corner et al. 2012;Teel et al. 2006;Shamon
et al. 2019) repeatedly examined whether or not biased
processing of balanced arguments contributes to polarisation
tendencies. Empirical evidence is mixed: while some studies
find support for attitude polarisation as a consequence of
exposure to conflicting arguments (Taber and Lodge 2006;
Taber et al. 2009;Lord et al. 1979;McHoskey 1995),
other studies report no evidence (e.g. Teel et al. 2006;
Druckman and Bolsen 2011;Corner et al. 2012;Shamon
et al. 2019). Unfortunately, it is difficult to say why those empirical studies find mixed evidence on the issue, because the conceptual and methodological heterogeneity of the studies does not allow systematic conclusions to be drawn (cf. Shamon et al. 2019, p. 108). Hence, despite
the fact that biased processing has been shown to be a
relatively robust cognitive mechanism, empirical evidence
on its consequences for attitude change is ambiguous. Our
approach takes into account that biased processing may
come in degrees (β) and shows that the question of attitude
moderation versus polarisation crucially depends on how
strongly subjects engage in biased processing. When subjects
are exposed to a balanced mix of pro and con arguments there
is a sharp qualitative transition from attitude moderation to
attitude polarisation when β crosses a critical value. That
is, attitude polarisation at the individual level requires a
sufficient level of biased processing.
Secondly, the close connection of experiment and
theoretical model advanced in this paper provides a
method to assess the strength of biased processing β
from experimental data. This is important since biased
processing might come in different degrees across different
issues. Indeed, our experiment (Shamon et al. 2019)
addresses attitudes towards six different technologies for
energy production and we find a clear signature of biased
processing in all of them. However, there are differences
in strength: while attitude data on gas and biomass shows
a weak bias clearly below the critical point between
attitude moderation and polarisation, arguments on coal,
wind (onshore and offshore), and solar power are subject
to stronger biases above the critical value. The refined
version of ACT would suggest that a group deliberation on
one of the former two technologies is less prone to yield
dissent compared to the latter four topics. The approach hence allows the microscopic mechanisms of argument exchange employed in ACT to be calibrated with respect to the specific topic addressed in a balanced-argument persuasion experiment. It enables a systematic interplay of experiment and theoretical refinement.
Thirdly, incorporated into a computational theory of group
deliberation such as ACT, we can address the implications
of biased processing at the collective level of groups or
populations. Previous modeling work incorporating biased
processing has shown that biased assimilation coupled with
homophily may generate patterns of collective polarisation
if the bias is sufficiently strong (Dandekar et al. 2013).
Dandekar et al. (2013) model biased processing in such a way that it "mathematically reproduces the empirical findings of Lord et al. (1979)" (Dandekar et al. 2013, p. 5793). However, they do not describe in detail the theoretical micro-process that underlies information processing and the resulting attitude changes, and they conclude that homophily alone is not sufficient for polarisation. This is in disagreement with one of the main results of ACT (Mäs and Flache 2013), which demonstrated that homophily alone may explain polarisation under positive social influence with unbiased argument adoption. We integrate biased processing into the framework of ACT to obtain a clearer picture of
its collective level implications. We show that weak biased
processing leads to a very efficient process in which a
group jointly supports one alternative whereas strong biased
processing leads to an intermediate phase in which two
subgroups with strongly opposing views emerge. This bi-
polarisation phase becomes exponentially more persistent
with an increase in processing bias. Thus, we show that
in the absence of other mechanisms, attitude polarisation
at the individual level is a prerequisite for collective bi-
polarisation. Homophily is not necessary but accelerates the
polarisation process and stabilizes a conflictual, bi-polarized
group situation.
2 Experiment
In 2017, 1078 persons participated in an online survey experiment designed to assess the impact of biased processing on attitude change regarding electricity generating technologies (Shamon et al. 2019). In this experiment, respondents' attitudes towards six technologies were measured both before (initial attitudes) and after (posterior attitudes) the presentation of 14 arguments on one of the six technologies (Setting 1: coal power stations; Setting 2: gas power stations; Setting 3: wind power stations (onshore); Setting 4: wind power stations (offshore); Setting 5: open-space photovoltaic; Setting 6: biomass power plants). The set of arguments was balanced in the sense that it comprised seven arguments speaking in favor (pro arguments) and seven arguments speaking in disfavor (counter arguments) of the respective technology. Respondents were asked to rate each argument's persuasiveness as well as to state their perceived familiarity with each argument. The research design allowed us to assess not only to what extent initial attitudes affect persuasiveness ratings of arguments but also to what extent respondents' initial attitudes change after exposure to the balanced set of 14 arguments.
The experiment provides empirical evidence that persons' engagement in biased processing depends systematically on their initial attitude. Figure 1 shows the extent to which the distributions of respondents' balance of argument ratings§ depend on their initial attitudes towards the respective technology focused on in the 14 arguments (hereinafter referred to as focused attitudes). The majority of respondents with initially negative focused attitudes rated counter arguments as more persuasive than pro arguments, and the majority of respondents with initially positive focused attitudes rated pro arguments as more persuasive than counter arguments. This pattern is perfectly in line with theoretical considerations on biased processing according to which persons tend to inflate the quality of those arguments that conform to their initial attitude and deflate the quality of those arguments that do not conform to it. Among persons with an initially negative as well as persons with an initially positive focused attitude, the persuasiveness balance is largest in absolute terms at the extreme points of the attitude scale, while it is modest among respondents with an initially neutral attitude. Hence, Shamon et al. (2019) conclude that respondents process arguments in a biased way and that their engagement in biased processing increases with the extremity of their attitudes.
Figure 1. Balance of argument ratings as a function of the initial focused attitude.

While the subjective ratings of argument persuasiveness confirm systematic biases in the evaluation of arguments,
it is of great practical concern how actual attitudes change after exposure to a balanced set of arguments that is not clearly in favour of or against a certain issue. If attitudes become generally more extreme after exposure to balanced information, the use of arguments in a societal debate would likely broaden the gap between supporters and opponents of different energy technologies (cf. Shamon et al. 2019). For this reason, a lot of experimental research has been invested in the question of whether biased processing implies attitude polarisation when subjects are exposed to conflicting arguments; this question, however, cannot easily be answered on the basis of the available empirical evidence due to its conceptual and methodological heterogeneity (see Introduction).
In order to obtain a more nuanced picture of attitude change under conflicting arguments, Shamon et al. (2019) suggest considering dynamics at the individual level by examining transition probabilities conditioned on the initial
focused attitude. That is, the patterns of attitude change
are considered independently for subjects with a negative,
a neutral and a positive initial attitude. Induced attitude
changes, in turn, are categorized with respect to polarisation
(more extreme), persistence (unchanged) and moderation (less extreme). This reveals that both attitude polarisation and moderation may occur simultaneously at the individual level and that these effects may average out at the aggregate level of the entire population. While the analysis in Shamon et al. (2019) allows for a more fine-grained understanding of the role of attitude extremity and its impact on biased processing, it remains puzzling what degree of biased processing is required for the emergence of attitude polarisation.

Initial and posterior attitudes towards electricity generating technologies were measured on a nine-point scale (0: strongly against the technology; 4: neither against nor in favor of the technology; 8: strongly in favor of the technology); respondents were also offered an exit option (cannot choose).
The 84 (= 14 x 6) arguments were developed by an interdisciplinary expert team consisting of engineers and physicists, economists, and social scientists at the Institute of Energy and Climate Research - Systems Analysis and Technology Evaluation (IEK-STE) at Forschungszentrum Jülich.
Respondents' persuasiveness ratings were registered for each argument on a nine-point scale (0: the argument is not at all persuasive; 8: the argument is very persuasive). Next to the persuasiveness rating scale, respondents could state their perceived familiarity with each of the 14 arguments (0: I am not aware of this argument; 1: I am aware of this argument).
§ For a respondent, the balance of argument ratings is calculated by subtracting his or her average persuasiveness rating for the seven counter arguments from the average persuasiveness rating for the seven pro arguments. Hence, a persuasiveness balance ranges from -8 (meaning that a respondent rated all seven counter arguments with 8 while rating all pro arguments with 0) to +8 (meaning that a respondent rated all seven pro arguments with 8 while rating all counter arguments with 0).
In this paper we bring the analysis of attitude-dependent
attitude changes to a higher level of sophistication by
deriving a statistical model for the full distribution of
conditional attitude change based on cognitive principles.
This allows us to vary the strength of biased processing
and to determine how well empirically observed attitude
changes are matched by a specific value. Starting from the
cognitive structure that underlies ACT, we incorporate biased
argument adoption and analyse the attitude changes that
would be expected under the given experimental conditions
(exposure to a balanced set of arguments). We account for
the strength of biased processing by a parameter β which governs the extent to which evaluation biases (Figure 1) lead to biases in argument adoption. This makes explicit, among other things, that attitude persistence at the extreme ends of the attitude scale is indicative of a rather strong processing bias contributing to a global pattern of attitude polarisation. We show that there is a sharp transition from attitude moderation to polarisation as β increases, rendering in-principle statements that biased processing leads to attitude polarisation somewhat ill-posed. Most importantly, as the processing bias β may depend on the issue under investigation, one can expect attitude moderation in some cases and polarisation in others.
3 A cognitive model of biased argument
processing
3.1 Attitude structure
While the majority of computational opinion models treat the opinion as an atomic unit, argument-based models (Mäs and Flache 2013; Banisch and Olbrich 2018) operate with a representation of opinions that takes some degree of cognitive complexity into account. Individuals usually hold concrete or abstract beliefs about attitude objects that imply a positive or negative evaluation of the attitude object and form an important basis for attitudes (Eagly and Chaiken 1993). The extent to which positively (or negatively) connoted beliefs outweigh negative (or positive) beliefs about an attitude object in an individual's belief system tends to determine the valence (positive or negative) and extremity of a person's attitude. Ignoring this formative structure of attitudes may therefore leave essential mechanisms unidentified. In ACT, this structure is modelled by a set of arguments that support either a positive (pro) or a negative (con) standing towards the issue in question. Agents can either believe and therefore adopt an argument or reject it, and the net number of pro- and con-arguments determines the overall attitudinal judgement. That is, an agent's attitude towards an issue (an electricity production technology in our case) is positive to the extent to which the number of pro-arguments exceeds the number of con-arguments in its belief system. This setting is shown in Figure 2 along with four example argument configurations and the respective attitudes.
Figure 2. Structure of opinions assumed by ACT (left) and four example configurations (right). Sets of pro- and con-arguments are assumed to underlie the attitudes towards different issues (energy production technologies). Single arguments can either be believed (1) or not (0). The numbers of pro- and con-arguments that an agent believes in determine the attitude towards the focus issue.
Formally, let us denote the number of possible pro- and con-arguments by N_+ and N_- respectively. We denote a single argument by a_i, where i is used to index the set of all arguments. Following earlier models (Mäs and Flache 2013; Banisch and Olbrich 2021), we assume that only two values are possible for each argument: a_i = 1 indicates that the argument is believed, and a_i = 0 that it is rejected. We further denote by e_i the evaluative contribution of argument i to the attitude. The e_i's are one for pro-arguments and minus one for con-arguments. An agent's opinion o is then given by

\[ o = \sum_i a_i e_i = n_+ - n_- \tag{1} \]

with n_+, n_- the number of currently held pro- and con-arguments respectively. On the right hand side of Figure 2, four different argument configurations are shown along with the resulting opinion for a setting with N_+ = N_- = 4. Maximal support (o = +4) is obtained when agents believe in all pro-arguments and no con-argument. A maximally negative opinion (o = -4) means that all con-arguments are considered valid and all pro-arguments are rejected. Equation 1 hence leads to opinions on a nine-point scale from -4 to +4, in agreement with the attitude scale used in the experiment.
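For concreteness, the attitude structure of Eq. (1) can be written down in a few lines of code. The following is a minimal sketch in Python; the variable names are ours and purely illustrative, not taken from the paper.

```python
import numpy as np

M = 4  # number of pro-arguments and of con-arguments (N+ = N- = 4)

# evaluative contributions e_i: +1 for the pro-arguments, -1 for the con-arguments
evaluative = np.array([1] * M + [-1] * M)

def opinion(arguments):
    # Eq. (1): o = sum_i a_i * e_i = n+ - n-
    return int(arguments @ evaluative)

# an agent believing all pro-arguments and rejecting all con-arguments
full_support = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(opinion(full_support))  # -> +4, maximal support on the nine-point scale
```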
3.2 Biased argument evaluation
The experiment described in Shamon et al. (2019) has
revealed a linear relationship between the current attitude
and the evaluation of argument persuasiveness (see Figure
1). One explanation for this phenomenon is that persons with a positive or negative attitude are motivated to produce defensive responses to attitude-incompatible arguments while they are motivated to develop favourable thoughts on attitude-consistent arguments (Kunda 1990; Petty and Cacioppo 1986). Another explanation for this argument evaluation bias might be individuals' striving for cognitive coherence (Festinger 1957; Thagard and Verbeurgt 1998).
To see this, let us regard the attitude structure described above as a simple cognitive network comprised of beliefs and a single attitude node which are linked by evaluative associations. We can define the coherence of a cognitive configuration made up of a specific argument string a and an opinion o by the net number of attitude-coherent versus attitude-incoherent evaluative associations weighted by attitude strength

\[ C(a, o) = \tfrac{1}{2} \sum_i (2 a_i - 1) e_i o = \tfrac{1}{2} \sum_i (2 a_i - 1) e_i (n_+ - n_-). \tag{2} \]

The transformation (2 a_i - 1) leading from {0, 1} to {-1, 1} is introduced because we want to take into account the contribution of rejected arguments a_i = 0, and the prefactor 1/2 is introduced for the respective normalization. If an agent is exposed to a new argument a'_i, we assume that its evaluation V(a'_i) depends on whether a'_i leads to an increase or decrease in cognitive coherence, that is, on the difference C(a', o) - C(a, o). For the opinion structure described above this yields

\[ V(a'_i) = C(a', o) - C(a, o) = (a'_i - a_i)\, e_i\, (n_+ - n_-). \tag{3} \]

In other words, the evaluation of a new pro-argument (i.e. e_i = 1) is a linear function of the current opinion with V(a'_i) = o = (n_+ - n_-). A new counter argument (e_i = -1), conversely, is evaluated as V(a'_i) = -o = (n_- - n_+). This aligns well with the linear relationship between initial attitude and bias in the rating of arguments that has been identified in the experiment (Figure 1).
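As a small illustration of Eqs. (2) and (3), the coherence of a configuration and the evaluation of a newly received argument can be computed directly from an argument string. The sketch below is in Python with our own illustrative names; the example string is arbitrary.

```python
import numpy as np

M = 4
evaluative = np.array([1] * M + [-1] * M)   # e_i: +1 for pro-, -1 for con-arguments

def coherence(arguments):
    # Eq. (2): C(a, o) = 1/2 * sum_i (2 a_i - 1) e_i (n+ - n-)
    o = arguments @ evaluative
    return 0.5 * np.sum((2 * arguments - 1) * evaluative * o)

def evaluation(arguments, index, new_value):
    # Eq. (3): V(a'_i) = C(a', o) - C(a, o) = (a'_i - a_i) e_i (n+ - n-)
    o = arguments @ evaluative
    return (new_value - arguments[index]) * evaluative[index] * o

a = np.array([1, 1, 0, 0, 1, 0, 0, 0])       # n+ = 2, n- = 1, hence o = +1
print(coherence(a))                           # current coherence of the configuration
print(evaluation(a, index=2, new_value=1))    # new pro-argument:  V = +1 (coherent)
print(evaluation(a, index=5, new_value=1))    # new con-argument:  V = -1 (incoherent)
```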
3.3 Biased argument adoption
If an agent is exposed to a new argument (a'_i in {0, 1}), either from peers (Section 6) or in an experimental treatment (Section 4), (s)he may adopt the argument or not. In current implementations of ACT (Mäs and Flache 2013; Feliciani et al. 2020; Banisch and Olbrich 2021) that do not incorporate intra-personal processing biases, this adoption probability (denoted as p) is homogeneous and independent of the current attitude (p = 1/2). Biased processing posits that the probability of argument assimilation depends on the current opinion (i.e. p(o)) in such a way that attitude-coherent arguments are adopted with a high probability (p(o) > 1/2) whereas this probability is reduced if the argument is incoherent with the opinion (p(o) < 1/2).
We are, however, not rational optimisers of cognitive coherence; rather, largely unconscious processes drive changes in our cognitive system. It would be highly implausible to assume that individuals with a negative attitude will never accept a pro argument. Biased processing, as conceptualised here in terms of a striving for cognitive coherence, comes in degrees. To take this into account we introduce a free parameter β for the strength of biased processing which determines the extent to which congruent arguments are favoured over incongruent ones. We use the logistic sigmoid function

\[ p_\beta(V(a'_i)) = \frac{1}{1 + e^{-\beta V(a'_i)}} \tag{4} \]

as a probabilistic model in which the probability to adopt or reject a new argument depends on the evaluation V(a'_i) of that argument in a non-linear way. For further convenience, we shall differentiate the cases that a'_i is a pro- or a con-argument and rewrite

\[ p^+_\beta(n_+, n_-) = \frac{1}{1 + e^{\beta (n_- - n_+)}}, \qquad p^-_\beta(n_+, n_-) = \frac{1}{1 + e^{\beta (n_+ - n_-)}} \tag{5} \]

which takes into account the linear relationship between argument evaluation V(a'_i) and attitude o = (n_+ - n_-).
Figure 3. Probability to adopt a con-argument (p^-_β) as a function of the current attitude for different values of biased processing strength β (from β = 0, no processing bias, over β = 0.2, 0.4, 0.8, 1.2, to β = 10, strong processing bias).
Figure 3 shows the behaviour of this probabilistic choice model for the case that an agent is confronted with a con-argument (p^-_β(n_+, n_-)). Unless β = 0, the probability of adoption is higher than chance if the current opinion is negative and smaller than 1/2 if it is positive. If β is large (bold orange line), the adoption of incoherent arguments becomes virtually zero and we approach the regime of the rational optimiser. If, on the other hand, β is zero (bold blue curve), there is no adoption bias and arguments are adopted with a homogeneous probability of 1/2. This limiting case therefore corresponds to the choice made in previous ACT models.
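The adoption rule of Eqs. (4)-(5) translates directly into code. A minimal sketch (Python; the β values below are illustrative only) reproduces the qualitative behaviour plotted in Figure 3:

```python
import numpy as np

def adoption_probability(o, beta, pro_argument):
    # Eqs. (3)-(5): a pro-argument is evaluated as V = o, a con-argument as V = -o;
    # the logistic function turns this evaluation into an adoption probability.
    v = o if pro_argument else -o
    return 1.0 / (1.0 + np.exp(-beta * v))

# probability to adopt a con-argument across the attitude scale (cf. Figure 3)
for beta in [0.0, 0.4, 1.2, 10.0]:
    probs = [round(adoption_probability(o, beta, pro_argument=False), 2)
             for o in range(-4, 5)]
    print(beta, probs)
```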
4 Theoretical implications for the
balanced-argument treatment
In this section we take the perspective of an individual sub-
ject. Using the cognitive model of attitude-dependent biased
processing described in the previous section, we derive a
subject’s expected reactions after exposure to an unbiased
set of arguments. This allows a precise characterisation of
whether attitude polarisation or moderation is expected at
the individual level by exposure to conflicting arguments as
realised in the experiment (see Shamon et al. (2019) and
Section 2).
4.1 Expected attitude change after exposure
to an unbiased set of arguments
In the experiment (see Section 2), subjects are confronted
with an unbiased set of pro- and con-arguments. Attitudes
are measured before and after the treatment and the effect
on attitude change is analysed. In order to relate these
experimental findings to the microscopic assumptions about
argument exchange in the model, we ask: How would
artificial cognitive agents react to the same experimental
treatment and what is their expected attitude change? For this
purpose, we consider that the opinion structure is comprised of four pro- and four con-arguments (see Figure 2). For further convenience, we shall denote this number by M = N_+ = N_- = 4. Each pro-argument, if believed, contributes +1 towards a positive attitude and each con-argument -1 towards a negative stance, and we have chosen this setup to align the model with the experiment in the sense that attitudes lie on a 9-point scale ranging from -4 to +4.
Let us assume that an agent receives an unbiased set of four pro- and four con-arguments at once. Attitude change may only take place if at least one a'_i is new to the agent and if it is adopted. That is, it depends in two different ways on the number of currently held pro- and con-arguments (n_+ and n_- respectively). First, with a certain probability arguments are already shared by the agent (a_i = a'_i) and do not present new information. Second, as n_+ and n_- define the current attitude of the agent by o = n_+ - n_-, they are relevant for the biased adoption probabilities p^+_β and p^-_β defined in (5). Namely, for an agent that already believes in n_+ pro-arguments, the probability of adopting k additional pro-arguments is given by the binomial distribution

\[ \Pr_{n_+}[\Delta n_+ = k] = (p^+_\beta)^k (1 - p^+_\beta)^{N_+ - n_+ - k} \binom{N_+ - n_+}{k} \tag{6} \]

where the adoption probability p^+_β depends on n_+ and n_- as given by Eq. (5). For con-arguments, we have equivalently

\[ \Pr_{n_-}[\Delta n_- = k] = (p^-_\beta)^k (1 - p^-_\beta)^{N_- - n_- - k} \binom{N_- - n_-}{k}. \tag{7} \]

Notice that an attitude change of k implies that the difference between adopted pro- and con-arguments is exactly k. Consequently, the probability that an attitude change of k is observed after exposure to all arguments is given by

\[ \Pr_{n_+, n_-}[\Delta o = k] = \sum_{l=k}^{M} \Pr_{n_+}[\Delta n_+ = l] \, \Pr_{n_-}[\Delta n_- = l - k]. \tag{8} \]

Eq. (8) completely characterizes the distribution of attitude changes conditioned on the numbers of currently held pro- and con-arguments. On its basis, the mean attitude change for an agent with n_+ pro- and n_- con-arguments can be computed and is given by

\[ E[\Delta o \mid n_+, n_-] = \sum_{k=-M}^{M} k \, \Pr_{n_+, n_-}[\Delta o = k]. \tag{9} \]

Notice that the mean expected attitude change E[Δo | n_+, n_-] depends on n_+ and n_- and is not equal for all configurations (n_+, n_-) that give rise to the same attitude o, except for the trivial case of β = 0. For instance, an opinion o = 0 may result from (n_+, n_-) = (0, 0) or (n_+, n_-) = (4, 4). While the probability that the argument treatment presents new, previously rejected arguments to the agent is large in the first case, it is zero in the latter. Since we do not in general know whether an opinion o came about by one or another argument configuration, we may assume that all argument configurations are equally likely (maximum entropy assumption). With this assumption, the expected attitude change Δo conditioned on the initial attitude o can be written as

\[ E[\Delta o \mid o] = 2 \tanh\!\left(\frac{\beta o}{2}\right) - \frac{2o}{M} \tag{10} \]

where M is the number of pro- and con-arguments.
Eq. (10) characterizes how agents endowed with the cognitive model described in Section 3 would react on average when exposed to an unbiased set of arguments. The artificial treatment for which it has been derived was designed to establish correspondence with the actual treatment in the experiment. We will use this relation to assess the strength of biased processing in the context of energy production technologies in Section 5. However, the model also provides more general insight into whether attitude moderation or polarisation is expected after exposure to balanced arguments and may hence provide a new perspective on the mixed empirical evidence on that question (Lord et al. 1979; Taber and Lodge 2006; Taber et al. 2009; Druckman and Bolsen 2011; Corner et al. 2012; Teel et al. 2006; Shamon et al. 2019).
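To make the derivation concrete, the following sketch (Python; names are ours, and the maximum entropy assumption is implemented by weighting each (n+, n-) configuration with the number of argument strings that realise it) computes the conditional expectation by direct enumeration and compares it with the closed form of Eq. (10):

```python
import numpy as np
from math import comb

M = 4  # N+ = N- = 4 as in the experiment-aligned setting

def adoption_prob(o, beta, pro=True):
    # Eqs. (3)-(5): adoption probability for a new pro- or con-argument given attitude o
    v = o if pro else -o
    return 1.0 / (1.0 + np.exp(-beta * v))

def expected_change_config(n_pos, n_neg, beta):
    # Eq. (9) via Eqs. (6)-(7): the binomial means give
    # E[delta o | n+, n-] = (M - n+) * p+ - (M - n-) * p-
    o = n_pos - n_neg
    return (M - n_pos) * adoption_prob(o, beta, True) - (M - n_neg) * adoption_prob(o, beta, False)

def expected_change(o, beta):
    # average over all configurations consistent with o (maximum entropy assumption)
    num = den = 0.0
    for n_pos in range(M + 1):
        n_neg = n_pos - o
        if 0 <= n_neg <= M:
            w = comb(M, n_pos) * comb(M, n_neg)   # number of argument strings with (n+, n-)
            num += w * expected_change_config(n_pos, n_neg, beta)
            den += w
    return num / den

beta = 0.8
for o in range(-4, 5):
    closed_form = 2 * np.tanh(beta * o / 2) - 2 * o / M   # Eq. (10)
    print(o, round(expected_change(o, beta), 3), round(closed_form, 3))
```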
4.2 Attitude moderation versus polarisation
Fig. 4 shows the behaviour of Eq. (10) for different values of biased processing β as a function of the initial attitude o. As described above, we have used the setting of four pro- and four con-arguments so that M = 4. With β = 0 (no bias), (10) reduces to the linear relationship E_β(Δo|o) = -o/2, which is shown by the blue line in Fig. 4. In this case, the expected change for agents with an initially negative stance (o < 0) is positive and agents with positive initial attitudes tend to adopt a less positive opinion after the treatment. Therefore, models with unbiased argument adoption would predict a relatively strong moderation effect when agents receive an unbiased set of arguments.
Figure 4. Expected attitude change after exposure to an unbiased set of arguments (E[Δo|o]) as a function of the current attitude for different values of biased processing strength β (from β = 0, no processing bias, to β = 10, strong processing bias).
Consider, for instance, an agent with a negative attitude of -4 for which the mean attitude change is +2. Such an agent believes in all 4 con-arguments but in none of the pro-arguments. When exposed to all 8 arguments, no more con-arguments can be adopted but, on average, with p = 1/2, one half of the pro-arguments will be adopted, leading to an increase of 2 in the attitude.
The other limiting case is marked by β → ∞, which is shown by the orange curves in Fig. 4 (notice that such a sharp bias is exemplified here by β = 10). As shown in Fig. 3, the adoption probability of attitude-challenging arguments is virtually zero, so that an agent with o = -4 will adopt none of the pro-arguments in this case. Likewise, with a moderate inclination towards one side of the attitude scale, a further strengthening of this view is likely so that initial attitudes are reinforced. That is, strong biased processing leads to attitude polarisation.
This means that the puzzle of whether attitude polarisation or moderation is likely after exposure to balanced arguments becomes a question of how strong the processing bias is for a given topic of interest. Eq. (10) predicts a relatively sharp transition from attitude moderation to attitude polarisation as the strength of biased processing β increases beyond a critical value β* = 1/2. This is shown in the bifurcation plot in Fig. 5 in which the stable attitudes with E[Δo|o] = 0 are shown as a function of β. For β < 1/2 there is a single fixed point at a neutral attitude of o = 0, indicating that individuals tend towards moderation. As β increases, the system undergoes a bifurcation in which the neutral fixed point becomes unstable and two stable fixed points at a positive and a negative attitude value emerge. These two fixed points quickly approach the extreme ends of the attitude scale. That is, attitudes are attracted towards the extremes after exposure to balanced arguments if biased processing becomes larger than 1/2. This corresponds to the regime of attitude polarisation.
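The transition can also be located numerically from Eq. (10) alone. The brief sketch below (Python; grid resolution and β values are illustrative choices) scans the attitude axis for points of zero expected change, which are the fixed points plotted in Fig. 5:

```python
import numpy as np

M = 4

def expected_change(o, beta):
    # Eq. (10): expected attitude change after a balanced argument treatment
    return 2.0 * np.tanh(beta * o / 2.0) - 2.0 * o / M

def fixed_points(beta, grid=np.linspace(-4, 4, 8001)):
    # report sign changes (and exact zeros) of E[delta o | o] along the attitude axis
    vals = expected_change(grid, beta)
    roots = []
    for i in range(len(grid) - 1):
        if vals[i] == 0.0 or vals[i] * vals[i + 1] < 0:
            roots.append(round(float(grid[i]), 2))
    return roots

for beta in [0.2, 0.4, 0.6, 0.8, 1.2]:
    print(beta, fixed_points(beta))   # a single root at 0 below the critical value, three above
```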
Figure 5. Transition from attitude moderation to attitude polarisation as the strength of biased processing (β) increases. This bifurcation plot shows the attitude values o at which E[Δo|o] = 0, indicating that no further attitude change is expected for this opinion. The system undergoes a qualitative change from a single stable opinion at o = 0 (moderation) to a state where opinions at the extreme ends of the attitude scale become stable attracting points (polarisation).
5 Experimental calibration
5.1 Overall assessment
Eq. (10) can be viewed as a class of statistical models that predict the expected attitude change after balanced argument exposure given an initial opinion. They are based on the basic assumption of ACT that argument assimilation drives opinion change. Consequently, the free parameter β has a clear meaning in terms of cognitive mechanisms. Namely, it governs to what extent congruent arguments are more likely to be adopted than incongruent arguments. In this setting we can ask: assuming that agents adapt by argument exchange (as in ACT models), what is the processing bias β that best matches the experimental data on attitude change? Notice again that previous applications of ACT have not incorporated any bias, corresponding to β = 0. Here we show that biased argument adoption agrees better with the experimental evidence.
In order to assess which bias β matches best with the available experimental data, we compare the theoretical prediction of the cognitive model with the experimentally observed attitude changes by considering the mean squared error between (10) and the data. Let us denote by (Δo_i | o_i) the observed attitude change Δo_i of subject i after treatment given his or her initial opinion o_i. For an initial attitude o_i, the prediction of our model is given by E_β(Δo | o_i) as specified in (10). The mean squared error over all observed values is then given by

\[ \epsilon_\beta = \frac{1}{N_S} \sum_{i=1}^{N_S} \left[ (\Delta o_i \mid o_i) - E_\beta(\Delta o \mid o_i) \right]^2. \tag{11} \]

In addition, we have used the non-linear estimation toolbox of Stata 14, which is based on the same error computation, to find the optimal β values.
In order to identify the optimal β we compute the MSE for different values of β from zero to 1.2. While the former corresponds to unbiased processing, the latter represents a strong processing bias with a clear trend towards attitude polarisation (see Figure 5). For the results shown in Fig. 6, 100 equidistant sample points in [0, 1.2] are used.
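This grid search is straightforward to implement. The sketch below (Python) follows Eqs. (10)-(11); the two data arrays are placeholders standing in for the experimental observations, which are not reproduced here:

```python
import numpy as np

M = 4

def predicted_change(o, beta):
    # Eq. (10): model prediction of the attitude change given the initial attitude o
    return 2 * np.tanh(beta * o / 2) - 2 * o / M

def mse(beta, initial, observed_change):
    # Eq. (11): mean squared error between observed and predicted attitude changes
    return float(np.mean((observed_change - predicted_change(initial, beta)) ** 2))

# placeholder arrays in place of the experimental data (initial attitudes on the
# -4..+4 scale and the observed attitude changes)
initial = np.array([-4, -2, 0, 1, 3])
observed_change = np.array([0, 1, 0, 1, 1])

betas = np.linspace(0.0, 1.2, 100)
errors = [mse(b, initial, observed_change) for b in betas]
print("best-fitting beta:", round(float(betas[int(np.argmin(errors))]), 3))
```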
Figure 6. Mean squared error between the argument adoption model and the experimental data on attitude change as a function of biased processing strength (β). The error analysis has been performed on the entire data set (N = 1078). As indicated by the red point, moderate biased processing (β ≈ 0.5) meets the data best; β = 0 corresponds to previous ACT models, the minimum to the calibrated model.
On the whole, we have data on N_S = 1078 subjects. The blue curve in Fig. 6 shows the MSE ε_β for the entire data set including all subjects. The MSE is relatively large for β = 0, decreases significantly until a minimum value at around β ≈ 0.5 is reached, and increases again as β becomes larger.
This is a clear indication that the argument adoption process
refined with biased processing more appropriately captures
argument-induced opinion changes. A model with moderate biased processing performs significantly better than what current implementations of ACT would predict.
5.2 Differences across issues
Our theoretical considerations in Section 4 have revealed a transition from attitude moderation to polarisation when the strength of biased processing crosses a critical value β* = 1/2. This suggests that one reason for the lack of clear evidence for one of the two regimes might be variation in the level of biased processing across the topics addressed in the different experiments. Shamon et al. (2019) provides data on six different energy-generating technologies and we can repeat the MSE analysis for each of them independently to gain some insight into these variations. These results are presented in Fig. 7.
Figure 7. Mean squared error between the argument adoption model and the experimental data on attitude change as a function of biased processing strength (β) for the single technologies (coal, gas, onshore and offshore wind, solar, biomass, and all technologies combined). The number of subjects in each technology setting ranges from N = 170 (coal) to N = 197 (solar). While the incorporation of biased processing improves the fit to experimental data in all cases, there are also variations across different energy-generating technologies. Biomass and gas, on the one hand, indicate a level of biased processing clearly below the critical point β* = 1/2; the other four provide an optimal fit at values slightly above β*.
This comparison reveals, first of all, a similar qualitative trend for all technologies, with an optimal fit at non-zero β. However, there are important differences when comparing gas and biomass on the one hand, and coal, wind and solar sources on the other. First, the processing bias at which the attitude change data is matched best is lower for the former. Secondly, the error is generally larger for gas and biomass. As a third observation, we notice in Fig. 7 that for increasing β the mean prediction error grows large for gas and biomass, whereas the MSE remains low for strong processing biases in the other cases. We can only speculate about the reasons for these differences, but they may reflect the fact that public discussions on gas and biomass have only recently gained momentum whereas discussions on coal versus wind and solar power have a long history.
Notice that the analysis does not inform us about the "best" model to explain the experimental data. We have only identified the best model within the class of models defined by E_β. There might be estimators E_{α,β,...} of a different form that further reduce the MSE. On the other hand, the considered model class has been derived from the theoretical assumptions of ACT and has a clear interpretation in this context. The fact that a specific value of the biased processing strength β can be identified by comparing E_β to the data, as well as the well-behaved shape of the error curves that render this value a clearly defined minimum, indicate that a relevant aspect of attitude change processes is captured by this model. In that sense, the analysis demonstrates that if an argument exchange model is used to analyse collective processes of attitude formation, the microscopic argument adoption process is better aligned with experimental data when a moderate amount of biased processing is incorporated. We can now turn to the collective-level implications of biased processing.
6 Collective deliberation with biased
processing
Argument communication models describe processes of collective attitude formation as repeated social exchange of arguments. An artificial population of agents is generated with an initial endowment of random argument strings. These agents are connected in a social network from which pairs of neighbouring agents are drawn at each time step. One agent acts as a sender s and the second one as a receiver r. The receiver incorporates an argument articulated by s with a probability defined by the cognitive agent model. While this probability is uniform and independent of the current attitude in previous models, it has been refined to incorporate biased processing in this paper. If a new argument is adopted, the attitude of r is updated accordingly. This process is repeated over and over again until a stable state is reached in which no further change is possible.
Previous work (Mäs and Flache 2013) has shown that ACT can explain collective bi-polarisation if individuals have a strong tendency to interact with similar others (homophily). In this section, we show that interaction homophily is not necessary for collective bi-polarisation. Biased processing alone can lead to persistent collective states in which one group of agents strongly supports a proposition whereas another group strongly opposes it.
6.1 Modelling collective argument exchange
In the model, N agents are generated with a random initial assignment of arguments. As in the previous sections, we use a setup with 4 pro and 4 con arguments (see Figure 2). Consequently, the initial opinion profile is described by a binomial distribution on a 9-point attitude scale. In the dynamical process the following steps are performed at each single time step:
1. all agents are paired at random (N/2 pairs) so that each agent interacts exactly once at each time step (either as sender or receiver),
2. for each pair, the sender s articulates a random argument to the receiver r,
3. the receiver adopts that argument with a probability defined by p_β (Eqs. 3-5), and
4. all agents chosen as receiver in this round update their opinion based on their new argument string.
After this is done for all pairs of agents, a new round starts
with another random pairing of the population.
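A minimal sketch of one simulation run is given below (Python; the population size, β value, random seed, and stopping criterion are illustrative choices, not values prescribed by the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, BETA = 4, 100, 0.8                    # illustrative parameters
evaluative = np.array([1] * M + [-1] * M)   # e_i: +1 for pro-, -1 for con-arguments

def opinions(a):
    return a @ evaluative                   # Eq. (1): o = n+ - n-

def simulation_round(a):
    # steps 1-4: random pairing, articulation, biased adoption, opinion update
    order = rng.permutation(N)
    senders, receivers = order[: N // 2], order[N // 2:]
    o = opinions(a)
    for s, r in zip(senders, receivers):
        i = rng.integers(2 * M)                              # sender articulates a random argument
        if a[s, i] == a[r, i]:
            continue                                         # nothing new for the receiver
        v = (a[s, i] - a[r, i]) * evaluative[i] * o[r]       # Eq. (3): evaluation
        if rng.random() < 1.0 / (1.0 + np.exp(-BETA * v)):   # Eqs. (4)-(5): biased adoption
            a[r, i] = a[s, i]
    return a

args = rng.integers(0, 2, size=(N, 2 * M))                  # random initial argument strings
for t in range(100_000):
    args = simulation_round(args)
    if (args == args[0]).all():                              # stable state: full argument consensus
        break
print("converged after", t, "rounds; final opinion:", opinions(args)[0])
```

With BETA = 0 the sketch reduces to the unbiased adoption rule of previous ACT models; values of BETA above the critical point make the intermediate bi-polarised phase described below increasingly long-lived.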
Figure 8. Illustration of the interaction between a sender and a
receiver.
Figure 8 illustrates the interaction between a sender and a receiver entering the process with specific argument sets. The sender is strongly in favour of an issue (e.g. an energy technology), believing in all 4 pro-arguments and rejecting all con-arguments (n_+ = 4 and n_- = 0). By random selection, s argues for the rejection of one con-argument (a'_i = 0). The receiver, holding a weak positive attitude, currently believes in this argument (a^r_i = 1). By (3), r comes to a positive evaluation of s's argument with V(a'_i) = 1 because it fits with the current attitude o_r = 1. Based on this evaluation, Eq. (5) decides with which probability r will adopt a'_i = 0. If biased processing is strong, this probability p_β is close to one, and without bias (β = 0) the argument is adopted with p_β = 1/2. Depending on adoption, r either remains with the current opinion or changes towards a slightly more positive view.
6.2 Model phenomenology
The model can give rise to a variety of collective phenomena.
In order to provide intuition about its dynamical behaviour
and to characterise the collective opinion processes that
follow from different processing biases, we first look at a
series of paradigmatic model realisations. Figure 9 shows four individual realisations of the model with increasing β from top to bottom. The number of agents is set to N = 1078. It shows (red shaded) how the distribution of opinions on the scale from -4 to 4 evolves due to repeated exchange of arguments. Superimposed on this evolving distribution, the mean opinion and the standard deviation (opinion diversity) are shown by the blue and red curves respectively. Notice that the number of steps needed for convergence varies greatly across these four cases. While 2000 iterations generally suffice to reach the stable state for moderate values of β (panels B and C), the time period is extended to 40000 in the first and 20000 in the last example. The plots are augmented with a characterisation of the different dynamical phases of the opinion process, which is briefly described on the right hand side of the figure.
Panel A shows the behaviour of the model in the absence of biased processing (β = 0). In this scenario, repeated argument exchange gives rise to a process of diversity reduction (IIIa) by which all agents coordinate on the same arguments and hence opinions. This process is very slow. Almost 40000 iterations (N/2 pair interactions each) are needed to converge. As the argument adoption is homogeneous (p_β = 1/2) and independent of the attitude, this model falls into the class of consensus models for which convergence properties have been established in a seminal paper by DeGroot (1974). In particular, the probability of ending up with a specific opinion o depends on the number of argument strings that are mapped to o, favouring moderate over extreme final opinions.
Panel B shows the effects of weak biased processing on the argument exchange process (β = 0.4). In this scenario, the population quickly approaches one or the other extreme on the attitude scale, with all agents strongly in favour or disfavour of the item in question. For symmetric initial conditions the probability to end up at +4 or -4 is fifty-fifty. Notice that compared to β = 0, convergence is extremely quick, taking less than 500 iterations. In the initial phase of the process (I) we observe a tendency of increasing opinion diversity due to biased processing. This is followed by a phase of diversity reduction (IIIb) mimicking a global choice shift towards one side.
If biased processing becomes larger (panels C and D) and crosses the critical value of β* = 1/2 (see Section 4.2), a different dynamical phase emerges in the first period of the process. Initially, the social pool of arguments is balanced, and with a strong processing bias agents are very likely to adopt arguments that support their initial attitudinal inclination and to reject arguments that challenge it. As the analysis in Section 4.2 has revealed, there is a strong tendency of attitude polarisation at the individual level. That is, each single individual will strengthen its initial opinion and approach one or the other extreme (I). Collectively, this leads to a state where approximately one half of the population approaches one extreme and the other half adopts an opposing view on the other side of the opinion spectrum (II). We refer to this state as bi-polarisation or collective polarisation. Once the system has entered such a state, a very different process sets in, which can be characterised as a competition for majority between the two opposing opinion groups. As shown in panel D (β = 1.2), this phase of competition can be extremely persistent, lasting more than 10000 iterations in this example. At a certain point, however, due to rare random events by which individuals change side, one camp gains a majority and the overall argument pool becomes biased in the respective direction. Agents with the minority opinion are then more and more attracted by this prevalence of majority arguments.
The phenomenological view provided in this section aimed to convey basic intuition about the collective processes that emerge when biased processing is incorporated into argument communication models. We have found that two remarkable transitions take place as the strength of the bias increases. First, with the incorporation of even a small processing bias, moderate consensus is no longer a stable outcome of the ACT models because the system quickly approaches a consensus at the extremes of the
attitude scale. From the perspective of a group that faces a decision problem, weak biased processing hence enables a rather efficient group decision process. Second, as biased processing increases, the system may enter a persistent collective regime of bi-polarisation with two groups of agents, one strongly in favour of and the other strongly against an issue (e.g. an energy technology). Strong biased processing hence leads to a suboptimal, conflictual group decision process. We will provide a more detailed analysis of these two transitions in the following two sections.

Figure 9. Four paradigmatic model realisations for different levels of biased processing from β = 0 to β = 1.2. The figure shows the time evolution of the opinion distribution of a population of N = 1078 agents (red shaded on a 9-point attitude scale), along with the mean opinion and a measure of diversity (standard deviation). Panel A, no biased processing (β = 0): a long process of diversity reduction (IIIa) leads to a final moderate consensus (IV). Panel B, weak biased processing (β = 0.4): initial diversification (I), one-sided diversity reduction (IIIb), extreme consensus (IV). Panels C and D, strong biased processing (β = 0.8 and β = 1.2): initial diversification (I), bi-polarisation (II), resolution of one group (IIIc), extreme consensus (IV). Without biased processing (A) the model leads to a long process in which the population converges to a moderate opinion. Weak biased processing (B, β < β*) leads to quick global convergence to one or the other side of the attitude scale (choice shift). As biased processing increases (C and D, β > β*) an intermediate regime of strong bi-polarisation emerges. This meta-stable state becomes persistent for large β (see D).
6.3 First transition: Weak biased processing
leads to fast collective decisions
As shown in Fig. 9, β = 0 (no bias) leads to a very long process in which the population is not clearly supporting or opposing an issue. On the other hand, with β = 0.4, convergence times speed up by orders of magnitude, leading to a very fast choice shift by all agents after which the group clearly favours one side over the other. To better understand this transition, we run a computational experiment focusing on convergence times and the "sidedness" of the final group opinion while varying the level of biased processing. In each simulation we measure the time that the system needs to converge to a stable state (consensus, phase IV in Fig. 9) along with the absolute value of the respective consensual opinion. Notice that the model is symmetric with respect to the attitude scale and converges to either side with equal probability. The processing bias is varied from zero to β = 0.4 in steps of 1/60 ≈ 0.0167 (25 points) and at each sample point 100 simulations are performed.
Fig. 10 shows the mean convergence time and the
respective distribution over 100 runs on a logarithmic scale.
Figure 10. Time to reach a stable consensus profile as a function of β ∈ [0, 0.4]. For each sample point a series of 100 simulations with N = 100 agents is performed. The mean value as well as the minimum and maximum values are shown along with the respective distribution of convergence times (light red). Inset: the mean absolute value of the final group opinion. Under weak biased processing the final outcome shifts to the extremes of the opinion spectrum.
Minimal and maximal values are shown by the thin lines.
While it takes on average 5000 steps to converge for β = 0, the mean number of iterations required to reach a stable profile is below 500 for β ≥ 0.25. Hence, weak biased processing significantly accelerates the consensus process. In the inset of Fig. 10 the mean absolute value of the final group opinion is shown as a function of the processing bias. For β > 0.1 the probability of ending up in a state different from -4 or +4 approaches zero, revealing a rather sharp transition towards an "extreme consensus".
We conclude that the inclusion of biased processing
drastically affects the collective-level predictions of ACT
models. Even under very weak processing biases, moderate
consensus is no longer a stable outcome of the model. Instead
we observe quick convergence to one of the ends of the
opinion spectrum where the entire group is strongly in favour
or disfavour of the attitude object. Hence, while groups
without processing bias may remain in indecision for a long
time not clearly favouring one side over the other, even small
biases lead to a fast decision process with a clear outcome.
This has implications for previous theoretical work using ACT (Mäs et al. 2013; Feliciani et al. 2020) and points
towards an evolutionary function of biased processing at the
group level. We will discuss both points in the concluding
part of the paper.
6.4 Second transition: Strong biased
processing leads to persistent intra-group
conflict
The phenomenological analysis in Section 6.2 shows that
biased processing alone may lead to persistent collective
bi-polarisation. A particular composition of social groups
formed around opinions may foster its emergence but it is
not necessary. To our knowledge, this is the first model
that demonstrates this. As biased processing increases, the
collective behaviour of the model undergoes a second
transition from a regime of fast collective choice shift to
a regime where enduring collective disagreement becomes
likely. The initial periods (I) in Figure 9 suggest that the emergence of the disagreement regime rests on whether biased processing is strong enough to sustain attitude polarisation at the individual level. That is, we expect that collective opinion polarisation becomes possible as β crosses the critical value of β = 1/2 (cf. Figure 5). This section substantiates this expectation with a systematic computational experiment.
In order to systematically compare sets of model
realizations regarding their potential to create collective
polarisation, we have to identify whether a model trajectory has entered phase II in Figure 9. Many measures of opinion
polarisation have been conceived (see Bramson et al. (2016)
for an overview), and we define a conservative heuristic that
captures the most important aspects. We say that a system
configuration is in phase II if the proportion of agents with
extreme opinions on both sides of the opinion spectrum
is larger than the proportion of agents with an opinion
in between the two extremes. To be precise, we split the
opinion interval into three and count the number of agents
with opinion -3 and -4 (negative extreme), the number of
agents with opinion 3 and 4 (positive extreme), and the
number of agents with an opinion from -2 to 2 (moderate).
If both the number of extremely positive agents and the
number of extremely negative agents exceed the number
of moderates, we mark this configuration as bi-polarized.
Notice that this definition implies maximal opinion spread, high dispersion and low kurtosis, to name a few measures used in the literature (DiMaggio et al. 1996; Bramson et al. 2016).
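As an illustration (a minimal Python sketch, not the authors' original implementation), the phase-II condition can be checked on an opinion profile as follows:

```python
import numpy as np

def is_bipolarised(opinions):
    """Phase-II heuristic: both extreme camps must outnumber the moderates.
    Opinions are integers on the 9-point scale from -4 to +4."""
    opinions = np.asarray(opinions)
    negative_extreme = np.sum(opinions <= -3)   # opinions -4 and -3
    positive_extreme = np.sum(opinions >= 3)    # opinions +3 and +4
    moderate = np.sum(np.abs(opinions) <= 2)    # opinions -2 ... +2
    return negative_extreme > moderate and positive_extreme > moderate

# Example: 40 strong opponents, 40 strong supporters, 20 moderates -> bi-polarised
profile = np.concatenate([np.full(40, -4), np.full(40, 4), np.zeros(20, dtype=int)])
print(is_bipolarised(profile))  # True
```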
In the computational experiment we run a series of 100 realizations with N = 100 agents for 100,000 iterations and compute for each realization the number of time steps in phase II according to the definition above. The model parameter β ranges from zero to 1.2 as before (Sections 4 and 5), sampled with a step size of 0.05 (25 points). For a given β, we assess (i.) the probability with which collective polarisation emerges and (ii.) the number of time steps the system remains in this state (persistence). Notice again that one iteration corresponds to N/2 interaction events in our implementation. Both measures are shown in Figure 11.
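Continuing the previous sketch, the two measures can be computed from stored opinion snapshots of each run; dummy_trajectories below is a hypothetical placeholder for the states produced by the actual model runs.

```python
# Continuing the previous sketch (assumes numpy as np and is_bipolarised are in scope).
# dummy_trajectories stands in for the stored opinion snapshots of the actual model
# runs (here: 10 runs of 200 iterations with random opinions on the 9-point scale).
rng = np.random.default_rng(1)
dummy_trajectories = [
    [rng.integers(-4, 5, size=100) for _ in range(200)]
    for _ in range(10)
]

# (i.) fraction of runs that ever satisfy the phase-II condition,
# (ii.) number of time steps spent in phase II per run (persistence).
steps_in_phase_ii = [sum(is_bipolarised(snapshot) for snapshot in run)
                     for run in dummy_trajectories]
probability_polarised = np.mean([steps > 0 for steps in steps_in_phase_ii])
mean_persistence = np.mean(steps_in_phase_ii)
print(probability_polarised, mean_persistence)
```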
The blue curve shows the relative number of model runs that resulted in at least one (transient) configuration satisfying our polarisation condition. The red curves show the respective number of time steps that a polarised state persisted across all 100 model runs, highlighting the mean as well as the minimal and maximal values. Notice the logarithmic scale on the right hand side of Figure 11. The regime β < 1/2 does not lead to transient states of stark disagreement. The first instance is observed at a value of β = 1/2. Hence, individual-level attitude polarisation is necessary for collective polarisation in our model. However, the effect is not persistent and is present for only 5 time steps in
the respective model run. Between β = 1/2 and β = 0.8 the probability of entering a state of bi-polarisation increases and reaches one for β > 0.8.

Figure 11. Probability that a system enters a state of collective polarisation (blue) and the time it remains in such a state (red) as a function of biased processing β. Results are based on 100 model runs per parameter value with N = 100. For β < 1/2 the system does not polarize. The first (singular) instance of bi-polarisation is observed at the critical value β = 1/2. As biased processing increases, the probability of a transient state of polarisation increases and approaches 1 for β > 0.8. The persistence (number of time steps) of polarisation is shown on a logarithmic scale. Mean persistence as well as the respective minimal and maximal number of steps are shown.

At the
same time, these transient opinion profiles of bi-polarisation become truly persistent. As shown by the distribution of time steps in polarisation, persistence grows exponentially, reaching several hundred iterations for β = 0.8 and several thousand steps for β > 1 (compare also Figure 9). The analysis underlines that persistent collective polarisation becomes likely if biased processing is strong.
It is worth noting at this point that a period of persistent bi-polarisation is likely to (i.) transform the social organisation of groups around opinions (homophily), (ii.) lead to the emergence of symbolic leaders promoting the group opinion (group identity), and (iii.) foster antagonistic relations across the groups (social polarisation). These processes are not
integrated into the model, but they would all favour further
persistence of collective polarisation once such a pattern has
emerged. Our model shows that biased processing alone may
be sufficient for the formation of camps that strongly support
competing opinions.
6.5 Influence of opinion homophily
One of the most prevailing assumptions in opinion dynamics is that the interaction probability between two agents depends on the similarity of their opinions (Axelrod 1997; Hegselmann et al. 2002; Deffuant et al. 2000; Banisch et al. 2010). In previous ACT models (Mäs and Flache 2013; Feliciani et al. 2020; Banisch and Olbrich 2021) this homophily principle is considered the main mechanism responsible for collective bi-polarisation. As all previous ACT studies draw on homophily, it is important to understand the interplay of biased processing and homophily within this theoretical framework.
There are different ways to integrate opinion homophily into ACT models. While Mäs and Flache (2013) propose to operationalize it in terms of biased partner selection, assuming that the attitudes of all other agents are known and close partners are selected with a higher probability, Banisch and Olbrich (2021) follow the tradition of bounded confidence models (Hegselmann et al. 2002; Deffuant et al. 2000) and use a threshold on the opinion difference for a given pair of agents. We adopt the latter approach here and assume that argument exchange takes place only if the opinion distance is below a certain threshold value.

Figure 12. Probability that a system enters a state of collective polarisation for different levels of biased processing and different levels of homophily. Results are based on 100 realisations with N = 100 agents. The figure compares the "base line" with biased processing only (strength governed by β) with three model variants of increasing homophily: weak homophily (no exchange between the extremes), moderate homophily (exchange when the opinion difference is less than 6) and strong homophily (exchange when the difference is less than 4).
To analyse the impact of homophily in the refined ACT model, a series of 100 simulations with N = 100 agents is performed for different values of β ranging from β = 0 to β = 0.8. As we are mainly interested in how far opinion
homophily may foster the emergence of a bi-polarised group
situation, we consider the fraction of simulation runs in
which polarisation (phase II in Fig. 9, see previous section)
can be observed in the transient dynamics.
In Fig. 12 the results are shown for three different values of the similarity threshold. In our model attitudes lie on a nine-point scale from -4 to +4. The weakest version of homophily is that agents strongly supportive of a given option (+4) do not enter into social exchange with agents strongly opposing it (-4), and vice versa. This corresponds to a similarity threshold of h = 8. Subsequently, we also show the results for h = 6 and h = 4 as well as the "base line" without homophily (black).
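As a minimal illustration of this threshold rule (a sketch, not the authors' implementation), the gate below decides whether two agents may exchange arguments on the nine-point scale:

```python
def may_exchange(opinion_i, opinion_j, h):
    """Bounded-confidence style homophily gate: argument exchange takes place
    only if the opinion distance on the 9-point scale (-4..+4) is below h."""
    return abs(opinion_i - opinion_j) < h

# h = 8 (weak homophily): only the two extremes, at distance 8, are cut off.
print(may_exchange(+4, -4, h=8))   # False
print(may_exchange(+4, -3, h=8))   # True
# h = 4 (strong homophily): exchange only when the difference is less than 4.
print(may_exchange(+2, -2, h=4))   # False
```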
The analysis shows that homophily makes polarisation the
most likely outcome of the collective process at significantly
lower levels of biased processing. Notably, group-level polarisation can now emerge in the regime of individual-level attitude moderation (β < 1/2). Even in the rather weak form, where exchange between the extremes is cut off, a significant shift of the transition point to lower β becomes visible. Besides this shift, homophily also brings about a
second qualitative change in collective model behaviour: a
completely bi-polarised opinion profile now becomes stable
and phase II (Fig. 9) enduring. Once the system reaches a
state of collective polarisation with agents concentrated at
both extremes of the opinion scale, homophily, as integrated
here by a threshold function, makes further argument
exchange impossible. Hence, any profile in which agents
are either completely in favour (+4) or against (-4) is an
absorbing final state of the model dynamics.
From the perspective of previous ACT models with β = 0, our results show that significantly lower homophily may result in a group split if a small amount of biased processing is introduced. While in our model a rather restrictive threshold of h = 4 still leads to a moderate consensus, a small deviation in terms of attitude-dependent argument adoption makes bi-polarisation the most likely outcome (dotted curve). This indicates that previous results are sensitive to small variations in assumptions about biased information processing. As the empirical part of this paper demonstrates an increased micro-level validity of ACT when biased processing is included (at β ≈ 0.3 for gas and biomass and β ≈ 0.6 for the remaining technologies), we have to assess whether previous conclusions drawing on ACT still hold in the presence of information processing biases.
7 Concluding remarks
We conclude this paper with a summary and a brief discussion
of its main contributions:
1. The paper presents a novel approach to combine
an experiment on argument persuasion with a
computational theory of collective deliberation. It
demonstrates that the theoretical framework of
argument communication theory (ACT) can not only
explain different dynamical phenomena in collective
deliberation (Mäs and Flache 2013; Mäs et al. 2013; Feliciani et al. 2020; Banisch and Olbrich 2021), but also provides a useful cognitive infrastructure to computationally map real experimental treatments. Starting from the theory, we develop a cognitively grounded statistical device to assess the extent
to which biased processing is involved in the
experimentally observed attitude changes induced by
conflicting but balanced arguments. We find that
biased processing is relevant and improves the micro-
level validity of argument-based models employed
in the theory. With this coherent account, bridging from experiments in Social Psychology to sociological models of collective opinion processes, our work contributes to the major challenge of grounding social influence models rigorously in experimental data (cf. Flache et al. 2017; Lorenz et al. 2020), and establishes ACT as a useful candidate for achieving such an empirically more solid connection.
2. Following this program, we are able to clarify
the relation between biased processing and attitude
polarisation at the individual level, which has remained puzzling given the diverging empirical evidence from different persuasion experiments (cf. Corner et al. 2012; Shamon et al. 2019). Here we tackle this question from the point of view of the computational agents employed in ACT and analyze how these cognitive agents would change opinions in a virtual experiment that closely matches the real treatment. The theoretical response function for the expected attitude change derived in this way contains the strength of biased processing (β) as a free parameter. The
theoretical analysis of this model shows that biased
processing may lead to attitude moderation or attitude
polarisation if subjects are exposed to balanced
arguments. Whether one or the other effect is
observed depends crucially on the strength with which
individuals engage in biased processing. In fact, our
analysis reveals a sharp transition from moderation
to polarisation, indicating that small, domain-specific
variations in the strength of biased processing may
result in qualitatively different patterns of attitude
change, both consistent with our theory. Our work
highlights that the question of whether biased
processing leads to attitude polarisation should not be
asked in absolute but in relative terms and provides
a theoretical explanation for why empirical evidence
across different domains is mixed.
3. Our empirical results concerning attitudes on energy
generating technologies show that the method
advanced in this paper can provide a more refined, domain-specific understanding because it allows us to measure the extent to which subjects engage in biased
processing. On the entire data set, we find a clear
signature of moderate biased processing at the margin
of moderation and polarisation. The independent
analysis of the six groups that received arguments
with respect to six different technologies reveals
remarkable differences across topics. While the
processing bias is in the regime of attitude moderation
for gas and biomass, it is significantly higher and in
the regime of polarisation for coal, wind (onshore
and offshore) as well as solar power. One possible explanation for these systematic differences is that beliefs on gas and biomass are less settled compared
to the other four technologies and that beliefs
regarding the latter are more strongly organized into
coherent systems of beliefs (Converse 1964).
4. The identification of the processing bias β that best matches the experimental data for a specific attitude object is an efficient way to calibrate agent-based models of argument communication on the basis
of balanced-argument experiments. The empirical
analysis in the context of debates on different energy
sources provides clear evidence that biased processing
plays an important role in argument-induced attitude
change and that its inclusion significantly improves
the micro-level validity of the mechanisms assumed
in current ACT models (β= 0). Given that different
topics may elicit different degrees of biased processing
(see previous point), the parameter β provides a means to adjust a computational model with respect to opinions on a specific topic. Recent work has
shown that ACT can incorporate arguments brought
forward in real debates (Willaert et al. 2020), and
the experimental calibration regarding the argument
exchange mechanism is a further step towards
empirically-informed models of opinion dynamics.
5. The analysis of the collective-level implications of
our refined model shows that the incorporation
of biased processing has tremendous effects on
the predictions of ACT regarding the evolution of
opinions within a group or a population. We observe
two transitions. First, and somewhat surprisingly,
weak biased processing accelerates group decision
processes by orders of magnitude. While a group
remains in a long period of indecision – not clearly
favoring one option over the other – in previous
models without bias, weak levels of biased processing
quickly lead to a state in which all members jointly
support one option. A second transition occurs if
biased processing increases. Under strong biased
processing the argument model leads to a persistent
conflictual state of subgroup polarisation.
6. Our study hence shows that biased processing alone
is sufficient for collective bi-polarisation. While the
original model by Mäs and Flache (2013) has shown
that polarisation is possible under positive social
influence if homophily is strong enough, our work
shows that preferences for interaction with like-
minded others are not necessary either. With that, our work adds to the growing body of literature on mechanisms that contribute to societal polarisation (see Flache et al. 2017; Banisch and Olbrich 2019, and references therein). Moreover, while the empirical plausibility of inter-personal mechanisms of negative influence has been challenged (Takács et al. 2016),
there is ample empirical evidence for the intra-
personal mechanism of biased information processing
that is at the core of our model. The experiment
analyzed in this paper further provides convincing
empirical ground for the microscopic validity of this
mechanism.
Let us close this paper by highlighting two avenues for
future research. In the context of the climate change debate,
ample empirical evidence on biased information processing
has been gathered in recent years. The experiment on which
our analysis relies (Shamon et al. 2019) addresses the issue at the level of specific arguments, providing a specific but at the same time systematic picture of how attitude extremity and direction impact biased processing.
Another type of empirical evidence comes from a series of
communication studies addressing the impact of selective
media exposure in the climate change debate (Feldman 2011; Hart and Nisbet 2012; Nisbet et al. 2015; Stroud 2017; Newman et al. 2018). While it has long been known that ideological affiliation is an important driver of media choice (Lazarsfeld et al. 1944), a more refined picture of the interplay of attitudes and media choices has been obtained within the "reinforcing spirals framework" (Slater 2007; Feldman et al. 2014). This theory posits a reinforcing feedback between
selective media choice and biased information processing
which over time increases informational fragmentation and
opinion polarisation. In future work, we will integrate
selective exposure into our model to analyse the polarisation
potential of selective exposure in the presence of biased
argument processing. Moreover, the reinforcing spirals
model does not yet account for interpersonal influences
(Feldman et al. 2014, p. 606). An operationalization within ACT overcomes this deficiency and provides a cognitive foundation that may prove useful for further disentangling the effects of biased processing, social influence and selective media exposure.
Secondly, this work inspires new thought about potential
evolutionary origins of biased information processing.
Groups often face situations in which cohesive action is
needed and where choosing any out of a set of alternatives
is better than taking no action at all. We found that a certain
level of biased processing is very efficient from the group
perspective in this specific sense. For a value close to the
critical value β = 1/2, the model predicts a very quick process
in which one alternative becomes jointly supported by the
entire group. Weaker biases slow down the group decision
process and the group may remain undecided for a long time.
Stronger biases, on the other hand, may lead to polarisation
and conflict. This points towards an evolutionary function of
biased processing and selective information processing more
generally: a specific level of bias may have evolved due to
the selective pressures on a group's ability to cohesively take
joint action.
All in all, this paper shows that biased processing increases the micro-level validity of ACT and has a strong impact on its macro-level predictions. Future work has to clarify whether
previous conclusions drawing on the theory still hold after
our empirical refinement.
Acknowledgements
Thanks to Stefan Westermann for pointing to the bifurcation analysis in Section 4. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 732942 (Odycceus
– Opinion Dynamics and Cultural Conflict in European
Spaces).
References
Axelrod, R. (1997). The dissemination of culture: A model with
local convergence and global polarization. The Journal of
Conflict Resolution, 41(2):203–226.
Banisch, S., Araújo, T., and Louçã, J. (2010). Opinion dynamics
and communication networks. Advances in Complex Systems,
13(1):95–111. ePrint: arxiv.org/abs/0904.2956.
Banisch, S. and Olbrich, E. (2018). An argument communication
model of polarization and ideological alignment. arXiv
preprint arXiv:1809.06134.
Banisch, S. and Olbrich, E. (2019). Opinion polarization by
learning from social feedback. The Journal of Mathematical
Sociology, 43(2):76–103. arXiv:1704.02890.
Banisch, S. and Olbrich, E. (2021). An argument communication
model of polarization and ideological alignment. Journal of
Artificial Societies and Social Simulation, 24(1):1.
Biek, M., Wood, W., and Chaiken, S. (1996). Working knowledge,
cognitive processing, and attitudes: On the determinants of
bias. Personality and Social Psychology Bulletin, 22(6):547–
556.
Bramson, A., Grim, P., Singer, D. J., Fisher, S., Berger, W.,
Sack, G., and Flocken, C. (2016). Disambiguation of
social polarization concepts and measures. The Journal of
Mathematical Sociology, 40(2):80–111.
Converse, P. E. (1964). The nature of belief systems in mass publics.
Critical Review, 18(1-3):1–74.
Corner, A., Whitmarsh, L., and Xenias, D. (2012). Uncertainty,
scepticism and attitudes towards climate change: biased
assimilation and attitude polarisation. Climatic change,
114(3):463–478.
Dandekar, P., Goel, A., and Lee, D. T. (2013). Biased assimilation,
homophily, and the dynamics of polarization. Proceedings of
the National Academy of Sciences, 110(15):5791–5796.
Deffuant, G., Neau, D., Amblard, F., and Weisbuch, G. (2000).
Mixing beliefs among interacting agents. Advances in Complex
Systems, 3(01n04):87–98.
DeGroot, M. H. (1974). Reaching a consensus. Journal of the
American Statistical Association, 69(345):118–121.
DiMaggio, P., Evans, J., and Bryson, B. (1996). Have Americans' social attitudes become more polarized? American Journal of
Sociology, 102(3):690–755.
Druckman, J. N. and Bolsen, T. (2011). Framing, motivated
reasoning, and opinions about emergent technologies. Journal
of Communication, 61(4):659–688.
Eagly, A. H. and Chaiken, S. (1993). The psychology of attitudes.
Harcourt brace Jovanovich college publishers.
Feldman, L. (2011). The opinion factor: The effects of opinionated
news on information processing and attitude change. Political
Communication, 28(2):163–181.
Feldman, L., Myers, T. A., Hmielowski, J. D., and Leiserowitz, A.
(2014). The mutual reinforcement of media selectivity and
effects: Testing the reinforcing spirals framework in the context
of global warming. Journal of Communication, 64(4):590–611.
Feliciani, T., Flache, A., and Mäs, M. (2020). Persuasion without polarization? Modelling persuasive argument communication
in teams with strong faultlines. Computational and
Mathematical Organization Theory, pages 1–32.
Festinger, L. (1957). A theory of cognitive dissonance, volume 2.
Stanford university press.
Flache, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G.,
Huet, S., and Lorenz, J. (2017). Models of social influence:
Towards the next frontiers. Journal of Artificial Societies and
Social Simulation, 20(4):2.
Friedkin, N. E. (1999). Choice shift and group polarization.
American Sociological Review, pages 856–875.
Friedkin, N. E. and Johnsen, E. C. (2011). Social influence network
theory: A sociological examination of small group dynamics,
volume 33. Cambridge University Press.
Gaisbauer, F., Olbrich, E., and Banisch, S. (2020). Dynamics of
opinion expression. Physical Review E, 102(4):042303.
Hart, P. S. and Nisbet, E. C. (2012). Boomerang effects in science
communication: How motivated reasoning and identity cues
amplify opinion polarization about climate mitigation policies.
Communication research, 39(6):701–723.
Hegselmann, R., Krause, U., et al. (2002). Opinion dynamics and
bounded confidence models, analysis, and simulation. Journal
of Artificial Societies and Social Simulation, 5(3).
Kobayashi, K. (2016). Relational processing of conflicting
arguments: Effects on biased assimilation. Comprehensive
Psychology, 5:2165222816657801.
Kunda, Z. (1990). The case for motivated reasoning. Psychological
bulletin, 108(3):480.
Lazarsfeld, P. and Merton, R. K. (1954). Friendship as a social
process: A substantive and methodological analysis. In Berger,
M., Abel, T., and Page, C. H., editors, Freedom and Control in
Modern Society, pages 18–66. New York: Van Nostrand.
Lazarsfeld, P. F., Berelson, B., and Gaudet, H. (1944). The people’s
choice. How the voter makes up his mind in a presidential
campaign.
Liu, C.-H., Lee, H.-W., Huang, P.-S., Chen, H.-C., and Sommers, S.
(2016). Do incompatible arguments cause extensive processing
in the evaluation of arguments? The role of congruence between
argument compatibility and argument quality. British Journal
of Psychology, 107(1):179–198.
Lord, C. G., Ross, L., and Lepper, M. R. (1979). Biased assimilation
and attitude polarization: The effects of prior theories on
subsequently considered evidence. Journal of personality and
social psychology, 37(11):2098.
Lorenz, J., Neumann, M., and Schröder, T. (2020). Individual
attitude change and societal dynamics: Computational exper-
iments with psychological theories.
Mäs, M. and Flache, A. (2013). Differentiation without distancing. Explaining bi-polarization of opinions without negative influence. PloS one, 8(11):e74516.
Mäs, M., Flache, A., Takács, K., and Jehn, K. A. (2013).
In the short term we divide, in the long term we unite:
Demographic crisscrossing and the effects of faultlines on
subgroup polarization. Organization science, 24(3):716–736.
McHoskey, J. W. (1995). Case closed? On the John F. Kennedy
assassination: Biased assimilation of evidence and attitude
polarization. Basic and Applied Social Psychology, 17(3):395–
409.
McPherson, M., Smith-Lovin, L., and Cook, J. M. (2001). Birds
of a feather: Homophily in social networks. Annual Review of
Sociology, 27:415–444.
Newman, T. P., Nisbet, E. C., and Nisbet, M. C. (2018). Climate
change, cultural cognition, and media effects: worldviews drive
news selectivity, biased processing, and polarized attitudes.
Public Understanding of Science, 27(8):985–1002.
Nisbet, E. C., Cooper, K. E., and Garrett, R. K. (2015). The partisan
brain: How dissonant science messages lead conservatives and
liberals to (dis) trust science. The ANNALS of the American
Academy of Political and Social Science, 658(1):36–66.
Petty, R. E. and Cacioppo, J. T. (1986). The elaboration likelihood
model of persuasion. In Communication and persuasion, pages
1–24. Springer.
Shamon, H., Schumann, D., Fischer, W., Vögele, S., Heinrichs,
H. U., and Kuckshinrichs, W. (2019). Changing attitudes and
conflicting arguments: Reviewing stakeholder communication
on electricity technologies in germany. Energy Research &
Social Science, 55:106–121.
Slater, M. D. (2007). Reinforcing spirals: The mutual influence
of media selectivity and media effects and their impact on
individual behavior and social identity. Communication theory,
17(3):281–303.
Stroud, N. J. (2017). Understanding and overcoming selective
exposure and judgment when communicating about science.
Oxford University Press New York, NY.
Sunstein, C. R. (2002). The law of group polarization. Journal of
political philosophy, 10(2):175–195.
Taber, C. S., Cann, D., and Kucsova, S. (2009). The motivated
processing of political arguments. Political Behavior,
31(2):137–155.
Taber, C. S. and Lodge, M. (2006). Motivated skepticism in the
evaluation of political beliefs. American journal of political
science, 50(3):755–769.
Takács, K., Flache, A., and Mäs, M. (2016). Discrepancy and
disliking do not induce negative opinion shifts. PloS one,
11(6):e0157948.
Teel, T. L., Bright, A. D., Manfredo, M. J., and Brooks, J. J.
(2006). Evidence of biased processing of natural resource-
related information: a study of attitudes toward drilling for
oil in the arctic national wildlife refuge. Society and Natural
Resources, 19(5):447–463.
Thagard, P. and Verbeurgt, K. (1998). Coherence as constraint
satisfaction. Cognitive Science, 22(1):1–24.
Willaert, T., Banisch, S., Van Eecke, P., and Beuls, K. (2020).
Tracking causal relations in the news. reflections on machine-
guided opinion mining. arXiv preprint arXiv:1912.01252.
Wood, W., Rhodes, N., and Biek, M. (1995). Working knowledge
and attitude strength: An information-processing analysis.
Attitude strength: Antecedents and consequences, 4:189–202.