Cognitive contagion: How to model (and potentially counter)
the spread of fake news
Nicholas Rabb1,*, Lenore Cowen1, Jan P. de Ruiter1,2, Matthias Scheutz1,
1
Dept. of Computer Science, Tufts University, Medford, MA, United States of America
2Dept. of Psychology, Tufts University, Medford, MA, United States of America
* Corresponding author: nicholas.rabb@tufts.edu
Abstract
Understanding the spread of false or dangerous beliefs – so-called mis/disinformation –
through a population has never seemed so urgent to many. Network science researchers
have often taken a page from epidemiologists, and modeled the spread of false beliefs as
similar to how a disease spreads through a social network. However, absent from those
disease-inspired models is an internal model of an individual’s set of current beliefs,
where cognitive science has increasingly documented how the interaction between
mental models and incoming messages seems to be crucially important for their
adoption or rejection. We introduce a cognitive contagion model that combines a
network science approach with an internal cognitive model of the individual agents,
affecting what they believe, and what they pass on. We show that the model, even with
a very discrete and simplistic belief function to capture cognitive dissonance, both adds
expressive power over existing disease-based contagion models, and qualitatively
demonstrates the appropriate belief update phenomena at the individual level.
Moreover, we situate our cognitive contagion model in a larger public opinion diffusion
(POD) model, which attempts to capture the role of institutions or media sources in
belief diffusion – something that is often left out. We conduct an analysis of the POD
model with our simple cognitive dissonance-sensitive update function across various
graph topologies and institutional messaging patterns. We demonstrate that
population-level aggregate outcomes of the model qualitatively match what has been
reported in COVID misinformation public opinion polls. The overall model sets up a
preliminary framework with which social science misinformation researchers and
computational opinion diffusion modelers can join forces to understand, and hopefully
learn how to best counter, the spread of misinformation and “alternative facts.”
Introduction
Understanding the spread of false or dangerous beliefs through a population has never
seemed so urgent to many. In our modern, highly networked world, societies have been
grappling with the widespread belief in conspiracies [1–5], increased political
polarization [6–9], and distrust in scientific findings [10, 11]. Most prominent in our
times are conspiracies surrounding COVID-19 and starkly partisan distributions of
beliefs regarding scientifically-motivated safety measures.
Throughout the course of the pandemic, much effort has been spent trying to
understand why, in the face of a global pandemic, so many believed it was a hoax,
targeted political attack, result of 5G cell towers, or just that it was not dangerous and
did not justify wearing a protective mask [1,3–5,10,12–14]. Understanding the spread of
misinformation requires a way of modeling and understanding both how this
misinformation spreads in a population, and also why some individuals are more or less
vulnerable.
July 8, 2021 1/28
arXiv:2107.02828v1 [cs.SI] 6 Jul 2021
This paper introduces a new class of simple models for the spread of misinformation
that includes components from both social network theory and cognitive theory. While
social network theory and cognitive theory have both sought to contribute to the
understanding of the mechanisms that govern individuals’ acquisition and updates of
beliefs, each has traditionally focused on different pieces of this puzzle. Social network
theory has provided interesting insights by applying techniques originally developed to
model the spread of disease to modeling the spread of misinformation [2, 15–18].
Modern psychological and cognitive theory has focused on how the relationship of
suggested beliefs relates to an individual’s current set of beliefs, and how this influences
an individual’s likelihood of updating their beliefs in the face of confirmatory or
contradictory information [3,13,14,19–22]. We introduce a new class of models, cognitive
contagion models, that adopt a network-based contagion model from social network
theory [23, 24], but include a more individually differentiated model of belief update and
adoption that is informed by cognitive theory. We show that even very simple versions
of cognitive contagion result in interesting network dynamics that seem to well-represent
some of the real-world phenomena that were seen in pandemic misinformation.
An additional contribution of this work is the inclusion of news or media sources
explicitly as a different type of network node from the individuals in the network. It is
suspected that information coming from media sources plays an out-sized role in the
spread of misinformation [1, 11, 25] – including how difficult false beliefs, once adopted,
are to extricate – and their inclusion suggests some simple strategies by which
misinformation might be countered.
These beliefs, once adopted, are so difficult to extricate that over the course of 2020,
the proportion of those who did not believe in the virus hardly changed [26].
Anecdotally, reports circulated of nurses in states with few regulations like South
Dakota, describing patients who would be dying of COVID and refusing to believe they
had it. One nurse was quoted saying, “They tell you there must be another reason they
are sick. They call you names and ask why you have to wear all that ‘stuff’ because
they don’t have COVID and it’s not real” [27].
Modern psychological and cognitive theory has made a substantive contribution by
making a clear distinction between beliefs that update in accordance with the evidence
one receives, and others which persist despite clear, logical contrary evidence [21,28,29].
This is the distinction that appears key to understanding mechanisms governing belief
in misinformation. In fact, there is evidence that those who engage in manufacturing
misinformation exploit this research to make their messages more potent. Wiley [30] has
recently revealed that some polarized, partisan beliefs and conspiracies have been
designed to persist despite contrary evidence.
While the study of individual beliefs has recently been advancing, attempting to
determine how beliefs, true or otherwise, spread through an entire population adds yet
another layer of complexity. Sociologists and political theorists have long studied public
opinion: the theoretical mechanisms by which populations come to certain beliefs, which
are notoriously difficult to verify empirically [31, 32]. However, with the advent of
social network research, scholars are moving toward that goal, studying how information
cascades through groups [2,23,33–35], and the wide-scale adoption of certain beliefs [36].
Our work seeks to take one further step: understanding how an integrated model
combining individual cognitive belief models with social network dynamics might be
employed to study how public opinion shifts.
In the social network theory realm, our work generalizes a class of Agent-Based
Models (ABMs), called Agent-Based Social Systems (ABSS) [37] – more specifically,
social contagion models [24]. ABM is a powerful modeling paradigm that has both
successes and future potential in a variety of areas, including animal behavior [38–42],
social sciences [43], and, notably, opinion dynamics [24,44–46].
The applications of ABMs, even within the field of opinion dynamics, are very
diverse [47], but all ABMs must balance complexity of the agent and environment
models, and strength of results [48]: A model with simple rules, but intriguing emergent
results often raises more interesting implications than a model that strives to match the
complexity of reality [49] (for a strong example of this principle, see Schelling’s model of
segregation [43]). Amidst a vast literature of what Zhang & Vorobeychik [47] – in a
review of innovation diffusion ABMs – call “cognitive agent models,” social contagion
models appear to be the simplest models of opinion dynamics, and have subsequently
sparked robust theoretical discussion in the social sciences [23, 24, 50].
By combining the simplicity of the social contagion paradigm with some of the
cognitive literature regarding misinformation, we propose our first major contribution: a
cognitive contagion model that captures the spread of identity-related beliefs. This
model is then tuned to capture, on the individual agent-level, two major effects cited in
misinformation literature: dissonance [19], and exposure [10, 51], capturing what we call
defensive cognitive contagion (DCC).
We conduct an analysis of the cognitive contagion model within our second
contribution, what we call a public opinion diffusion (POD) model. We situate agents
following cognitive contagion rules in a model that includes institutional agents,
addressing the crucial ontological fallacy that only individuals play a part in these
systems [52]. Media companies have played crucial roles in COVID
misinformation [1, 11], and must be captured in our model.
Through a simple cognitive function defined at the level of individual agents, our
cognitive contagion model is more powerful than simple or complex contagion.
Moreover, the motivations behind the DCC function demonstrate that simple and
complex contagion cannot capture identity-related belief spread between individuals.
Given findings from COVID misinformation studies, results from our POD model with
DCC appear to align with population-level results reported by opinion polls – namely
that beliefs about the virus remained starkly partisan [6–9], and hardly changed
throughout 2020 [26]. These preliminary results hint at possible interventions and offer
plenty of opportunities for future studies to be conducted using these methods.
Background
Simple Contagion
ABMs have been widely used to model social contagion effects – those which describe
the process of ideas or beliefs spreading through a population [23]. Such models attempt
to explain possible processes underlying the spread of innovations [47,53, 54], or
unpopular norms [45]. These models are extensions of earlier work in sociology that
theorized how social network structure and simple decisions, such as the threshold
effect [50], may affect group-level behavior. Through more abundantly available
computational power, these ideas can now be simulated and their implications can be
analyzed.
There are two popular types of social contagion models used in ABMs: simple and
complex contagion. Both model the spread of behaviors, norms, or ideas through a
population. For simplicity, we will refer to behaviors or norms as “beliefs” going
forward, as it is plausible to argue that both are generated by beliefs that an individual
holds, explicit or otherwise. The simple contagion model assumes that behaviors or
norms can spread in a manner akin to a disease [23, 33, 34, 44]. Simply being connected
to an individual who holds a belief engenders a probability, p, that the belief may
spread to you. This can even be true given different belief strengths or polarities for the
same proposition. More formally, given two nodes in the model, u and v, with
respective beliefs b_u and b_v, when node u is playing out its decision process, the
probability of u adopting belief b_v can be modeled as:

P(b_u = b_v | b_u) = p.    (1)
It is important to note that in belief contagion models, the probabilities assigned to
adopting a new belief given a prior one are over an event set of only two outcomes:
adopt the new belief, or keep the prior one. Thus, if there are several possible beliefs to
adopt from a set B, the sums of probabilities of adopting some b_v given a prior of b_u,
for all values in B, do not necessarily add to 1. Rather, because each agent interaction
is only between two belief values, even if they both come from a larger set, the
interaction is represented as a Bernoulli process. Each instance below in which we
motivate probabilities of adopting a belief given a prior adheres to the same logic.
This is graphically depicted in Fig. 1.
Fig 1. An illustration of simple contagion. Given agents u and v, contagion of
b_v = 0 to u happens with probability p = 0.15. The probability of not changing beliefs
becomes 0.85.
This simple equation can also be used to show how many infected neighbors are
necessary to nearly guarantee any agent being infected. Suppose an agent u with
neighbors N(u) and a simple contagion probability of p. With each infected neighbor in
N_i(u), u's chance of infection increases to 1 − (1 − p)^|N_i(u)|. If the model assumes some
"high-likelihood" threshold for infection, δ = 0.95, as a near guarantee of infection, some
|N_i(u)| will raise the infection chance such that 1 − (1 − p)^|N_i(u)| ≥ δ. The requisite
number of neighbors becomes:

|N_i(u)| ≥ ⌈log(1 − δ) / log(1 − p)⌉    (2)

For example, if p = 0.5 and δ = 0.95, then |N_i(u)| ≥ ⌈4.32⌉ = 5 – u would need at least
five infected neighbors to almost guarantee infection.
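The arithmetic above can be checked with a short helper. This is a minimal sketch (the function and variable names are ours, not the paper's code): `requisite_neighbors` inverts Eq. (2), and `exposed_agent_infected` plays out one step of exposure as a Bernoulli trial per infected neighbor.

```python
import math
import random

def requisite_neighbors(p: float, delta: float = 0.95) -> int:
    """Smallest number of infected neighbors such that the chance of at
    least one successful transmission, 1 - (1 - p)^n, reaches delta."""
    # Solve 1 - (1 - p)^n >= delta  =>  n >= log(1 - delta) / log(1 - p)
    return math.ceil(math.log(1 - delta) / math.log(1 - p))

def exposed_agent_infected(p: float, n_infected: int, rng: random.Random) -> bool:
    """One update step: each infected neighbor independently transmits
    the belief with probability p (a Bernoulli trial per contact)."""
    return any(rng.random() < p for _ in range(n_infected))

print(requisite_neighbors(0.5))  # 5, matching the worked example above
```

With the paper's experimental value of p = 0.15, the same formula gives 19 infected neighbors for a near-guaranteed infection, which illustrates why that lower probability slows the spread.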
Of course, since beliefs are not actually transmitted through airborne pathogens that
incite infected individuals to believe something, there are abundant sociological
hypotheses as to why this phenomenon may appear infectious [23]. There are other
contagion models that have put forward alternative explanations.
Complex Contagion
Complex contagion instead imagines that the spread of beliefs is predominantly governed
by a ratio of consensus among those to whom any agent is connected [24]. In perhaps its
simplest variant, given a focal agent (the ego), it may adopt a belief if a certain
proportion of its neighbors in the network (the alters) hold it. One of the most
famous examples of this type of model is Schelling's segregation model [43]. Formally,
given an ego u and a set of neighbors N(u), the probability of adopting belief b can be
represented as:

P(b_u = b | b_u) = 1 if (1/|N(u)|) Σ_{v∈N(u)} d(v, b) ≥ α, and 0 otherwise,    (3)

where d(v, b) is a simple indicator function which returns 1 if b_v = b – i.e. if neighbor
v believes b – and where α ∈ [0,1] is a threshold indicating the ratio of believing
neighbors necessary for u to adopt the belief. In this model, the agent u is guaranteed
to adopt b if a sufficient ratio of its neighbors believe b. We depict this graphically in
Fig. 2.
Fig 2. An illustration of complex contagion. Given an agent u, contagion would require
α = 0.4 of u's neighbors to have some belief b_v. Since 2/5 of u's neighbors have belief
b_v = 0, u adopts b_u = 0 with probability 1. The chance of adopting any other
neighbor's belief is 0.
It may seem tempting to imagine, given the simple contagion variation in Eq. 2, that
the behavior of complex contagion can also be modeled with the “sufficient number of
neighbors” idea. However, that notion does not capture the ratio effect that complex
contagion models. Complex contagion says nothing about the innate infectiousness of
any belief, but rather the infectiousness of the connections surrounding any agent.
The focus on a “portion” of believing neighbors being required to propagate a belief
spawned new questions and investigations. This type of belief contagion has been
argued to explain why some norms may spread despite them being disagreed with on an
individual level, such as collegiate drinking behavior [45]. It elegantly captures
phenomena associated with group dynamics such as peer pressure. It also can be used
to model diffusion of health information or technological innovations [55].
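The threshold rule in Eq. (3) reduces to a few lines. The sketch below is our own illustration (function name and example neighborhood are not from the paper's code); it counts the indicator d(v, b) over the neighborhood and compares the ratio against α.

```python
def complex_contagion_adopts(b: int, neighbor_beliefs: list[int], alpha: float) -> bool:
    """Eq. (3): the ego adopts belief b (with probability 1) iff the fraction
    of neighbors holding exactly b - the indicator d(v, b) - is at least alpha."""
    if not neighbor_beliefs:
        return False
    believing = sum(1 for b_v in neighbor_beliefs if b_v == b)
    return believing / len(neighbor_beliefs) >= alpha

# The Fig. 2 scenario: 2 of 5 neighbors hold b = 0 and alpha = 0.4,
# so the ego adopts b = 0 with probability 1.
print(complex_contagion_adopts(0, [0, 0, 3, 5, 6], alpha=0.4))  # True
```

Note that the decision depends only on the neighborhood ratio, never on any innate infectiousness of the belief itself, which is the contrast with simple contagion drawn above.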
Both these contagion models can be generalized to allow heterogeneous sets of agents
whose update rules are different for agents of different types (for example, more or less
susceptible to infection). However, while these two popular types of contagion can
effectively model some classes of belief contagion, others cannot be captured by their
mechanisms – even with heterogeneous agents. Notably, models where there is an
internal model of what a given individual agent already believes that dynamically affects
what beliefs they spread and adopt cannot be described by either simple or complex
contagion. We now introduce a more powerful model, cognitive contagion, that allows
the modeling and updating of individualized internal belief states.
Cognitive Contagion Model
The goal of this work is to show that even the simplest cognitive contagion model adds
expressive power and leads to interesting dynamics of belief propagation that cannot
arise in the simple or complex contagion models. In our simple example, we consider
belief in a single proposition B (for example, B could be, "COVID is a hoax," or,
"mask-wearing does not help protect against spreading or contracting COVID"). In
cognitive contagion models, as distinct from simple or complex contagion models, u's
probability of believing a message from v is influenced by an internal model of u's
beliefs. For our simple cognitive contagion model, we model this initially as a single
internal variable b_u, with −1 ≤ b_u ≤ 1 – representing u's prior inclination, based on its
internal state, to believe a message from v. Without loss of generality, we use −1 to
indicate strong disbelief and 1 to indicate strong belief.
b_u can be a continuous variable on the interval from strong disbelief to strong
belief, or it can take on discrete values. For simplicity, and also in harmony with
frequently used 7-point Likert scales in public opinion surveys (e.g. [6, 11, 25], and
justified by [56]), for the remainder of this paper we choose 7 discrete, equally spaced
values for belief in B as follows: we represent the strength of the belief in proposition B
with elements from the set B = {b | 0 ≤ b ≤ 6}, b ∈ Z; 0 represents strong disbelief, 1
disbelief, 2 slight disbelief, 3 uncertainty, 4 slight belief, 5 belief, and 6 strong belief.
Importantly, this representation captures the polarity of the proposition as well: belief
strength of the affirmative of B (if b ≥ 4), and the negation of B (if b ≤ 2). From here
on, we will capture the idea of belief polarity – belief in or against a proposition B – by
simply saying "belief strength."
We include this cognitive model for an individual agent within a message-passing
ABM: At each time step t, agents have the chance to receive messages, and to believe
and share them with neighbors. We will further clarify the role of messages in spreading
beliefs below, when we describe our diffusion model. But it should be noted upfront
that regardless of being passed by a message, or by simple network connection exposure,
we can compare beliefs from two agents, uand vthe same way. We further note that
cognitive theory shows evidence that, for beliefs that are core to an individual’s identity
(such as political or ideological beliefs), exposure to evidence that is too incongruous
with an individual’s existing belief causes individuals to disregard evidence in an
attempt to reduce cognitive dissonance [19]. Therefore, we later choose an update rule
where an agent u is only likely to believe messages when encoded belief values for
proposition Bare not too far from u’s prior beliefs.
As a simple example, this could be represented by a binary threshold function.
Given an agent u with belief strength in B, b_u, and an incoming belief from v with
strength b_v, the following equation could govern whether agent u updates its belief:

P(b_u = b_v | b_u) = 1 if |b_u − b_v| ≤ γ, and 0 otherwise,    (4)
where γ is a distance threshold. Each agent has some existing belief strength in the
proposition B, but will be unwilling to change their belief strength if a neighbor's belief
strength is too far from theirs. Perhaps an agent u who disbelieves the proposition
(b_u = 1) will not switch immediately to strongly believing it without passing through an
intermediary step of uncertainty. Given a neighbor v sharing belief b_v = 6, agent u
should not adopt this belief strength, because the difference in belief strengths is clearly
greater than γ. Simple contagion would fall short because agent u may simply randomly
become "infected" with belief strength 6 by v with some probability p. A complex
contagion would similarly falter if agent u were entirely surrounded by alters with belief
strength 6. It would inevitably switch belief strengths regardless of some threshold α as
in Eq. (3).
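The binary threshold rule of Eq. (4) can be sketched directly. This is an illustrative reconstruction (names are ours); for brevity it returns the resulting belief strength rather than a probability, since the rule is deterministic.

```python
def threshold_update(b_u: int, b_v: int, gamma: int) -> int:
    """Eq. (4): adopt the incoming strength b_v only when it lies within
    distance gamma of the prior b_u; otherwise keep the prior belief."""
    return b_v if abs(b_u - b_v) <= gamma else b_u

# A disbelieving agent (b_u = 1) rejects a strong-belief message (b_v = 6)
# but accepts a nearby one (b_v = 2), here with gamma = 1.
print(threshold_update(1, 6, gamma=1))  # 1: message rejected
print(threshold_update(1, 2, gamma=1))  # 2: message believed
```

Under this rule, the worked example above behaves as described: no amount of repetition or neighborhood consensus can move the agent directly from b_u = 1 to b_v = 6.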
Fig 3. An illustration of cognitive contagion with the DCC contagion function described
in Eq. 7. (a) (Top) Given an agent u with b_u = 6 and v with b_v = 0, the chance of
contagion is < 0.001. (b) (Bottom) Given an agent u with b_u = 6 and v with b_v = 5,
the chance of contagion is 0.982.
As mentioned above, this manner of belief update could model the update of beliefs
that are core to an individual’s identity, such as political or ideological beliefs [28, 57,58].
Rather than updating based on evidence presented, exposure to evidence that is too
incongruous with an individual’s existing belief may have no effect due to
rationalization processes activated by cognitive dissonance [19].
In addition to the effects related to cognitive dissonance, there are other effects that
have been reported to be involved in belief update. Pertinent to the misinformation
literature, two that center the incoming belief itself are the illusory truth effect [22, 51]
and the mere-exposure effect [59]. These effects emphasize that the number of
exposures to a piece of information can motivate belief in it.
Regardless of effect, it is clear that some social contagion processes cannot be
captured without modeling some sort of representation of an agent’s cognition. It is for
this reason that we have developed the cognitive contagion model. We will motivate an
alternative model of contagion that focuses on giving each agent in the network a very
simple cognitive model. In general, given an agent u with belief b_u, during its update
step, the likelihood of updating its belief strength from b_u to b, given its prior belief,
can be formalized as:

P(b_u = b | b_u) = β(b_u, b).    (5)
Because this equation is so general, there is a need to motivate a meaningful choice
of β function, and to analyze how its effects differ from simple and complex contagion.
There are obviously many choices for such a function, but the key lies in the fact that it
compares beliefs between two agents, rather than being driven by network structure or
mere chance. Below, we will describe our process of choosing a β function in order to
adequately model the misinformation effects that motivated our study.
Public Opinion Diffusion Model
Of course, a cognitive contagion model could be implemented on top of many different
ABMs, so we will describe one that captures the misinformation problem, and that we
can use over multiple experiments to arrive at a cognitive contagion model suited to the
problem. Our public opinion diffusion (POD) ABM will be designed to capture the
effects of misinformation spread by media companies, since in the aforementioned
COVID misinformation studies, media played a pivotal role in people’s belief or disbelief
in safety protocols [25, 60, 61]. Often, ABMs do not model so-called “levels” of social
systems: ontological distinctions in abstraction (e.g. those between individuals and
institutions) [52]. The POD model addresses this by including the institutional level in
the form of media agents. A visual description of the model is included in Fig. 4.
Fig 4. A visual description of one time step of the POD model. In the left panel, (A)
depicts the initial setup of a small network with institutional agent i1 with subscribers
s1, s2, s3. All agents in the network are labeled with their belief strength. The right
panel, (B), depicts one time step t = 0 of agent i1 sending messages
M1(t = 0) = (m0, m1). (i) shows the initial sending of m0 = 4 to subscribers, and (ii)
shows s1 and s3 believing the message and propagating it to their neighbors. (iii) and
(iv) show the same for m1 = 3, but only s3 believes m1.
As previously mentioned, we will be using a message-passing ABM: at each time
step t, agents have the chance to receive messages m from the set of all possible
messages M – whose spread begins with media agents, which we call institutional
agents – and to believe and share them with neighbors. We chose a message-passing
model as opposed to the simple diffusion models often found in simple and complex
contagion models because it allows us to capture the notion that beliefs are spread by
explicit communication rather than simply by being connected to an agent.
Our model consists of N agents in a graph G = (V, E), where each agent's initial
belief strength b_u, 0 ≤ u ≤ N, is drawn from a uniform distribution over the set of
possible belief strengths for proposition B, B = {b | 0 ≤ b ≤ 6}, b ∈ Z. There is a
separate set of institutional agents I which have directed edges to a set of "subscribers"
S ⊆ V if |b_u − b_i| ≤ ε for some parameter ε, u ∈ V, i ∈ I – i.e. an agent in the
network will subscribe to an institution if its belief strength for B is sufficiently close to
the belief strength of that institution. For all of our experiments, we will fix ε at 0.
Institutional agents are designed to model media companies or notable public figures
which begin the mass spread of ideas through the population. The belief strength of an
institutional agent can be thought of as a perceived ideological "leaning" that would
cause people with different prior beliefs to trust different media organizations.
At each time step, t, each institution i will send a list of messages to each of its
subscribers, represented by the function M_i : t → (m_0, m_1, ..., m_j), m_j ∈ M, j ≥ 0. In
this simple example, the set of possible messages M will only encode one proposition,
B, so for simplicity, we can set M = B. Additionally, institutions will only send one
message per time step. Whenever a message is received, an agent will "believe" it based
on the contagion method being utilized, where b_mj is the strength of belief in
proposition B encoded by the message, and b_u is the agent's belief strength for B. If
agent u believes message m_j, then its belief strength is updated to be b_mj. When an
agent believes a message, it shares the original message, m_j, with all its neighbors. It
should be noted that because agent u would change its belief strength to b_mj, agents
will always share beliefs that are congruous to their prior belief strengths – cohering to
our cognitive contagion model outlined above. After a neighbor receives a message, the
cycle continues: it has a chance to believe the message and, if believed, spread it to its
neighbors. To avoid infinite sharing, each agent will only believe and share a given
message once, based on a unique identifier assigned to it when it is broadcast by the
institutional agent.
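The cascade just described can be sketched as a breadth-first pass over the network. This is our own reconstruction, not the released NetLogo code: `believes` stands in for whichever contagion rule is in use (e.g. the threshold rule of Eq. 4), and the `seen` set enforces the believe-and-share-once rule keyed on the message's unique identifier.

```python
from collections import deque

def broadcast_step(graph, beliefs, subscribers, message, message_id, seen, believes):
    """One institutional broadcast: subscribers receive `message`; any agent
    that believes it adopts the encoded strength and forwards the original
    message to its neighbors. `seen` holds (agent, message_id) pairs so each
    agent processes a given message at most once."""
    queue = deque(subscribers)
    while queue:
        u = queue.popleft()
        if (u, message_id) in seen:
            continue
        seen.add((u, message_id))
        if believes(beliefs[u], message):
            beliefs[u] = message       # adopt the encoded belief strength
            queue.extend(graph[u])     # share the original message onward
    return beliefs

graph = {0: [1], 1: [0, 2], 2: [1]}          # a 3-agent path
beliefs = {0: 4, 1: 5, 2: 0}
rule = lambda b_u, b_m: abs(b_u - b_m) <= 1  # Eq. (4) with gamma = 1
broadcast_step(graph, beliefs, [0], 4, "m0", set(), rule)
print(beliefs)  # {0: 4, 1: 4, 2: 0}: agent 2 is too far from the message
```

Because believing agents overwrite their belief with the message's strength before forwarding it, the sketch also reproduces the property noted above: agents only ever share beliefs congruous with their (updated) priors.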
Our model and experiments were implemented using NetLogo [62] and Python 3.5.
Code is made available on GitHub 1.
Contagion Experiments
Experiment Design
We wish to show that a model of cognitive contagion, on top of this ABM, can capture
the observed effects of identity-based belief spread better than existing models of simple
or complex contagion. To do so, we will lay out a series of experiments to test each
contagion model given the same initial conditions.
For each model, we test three conditions where one institutional agent, i1, attempts
to spread different combinations of messages over time. We will refer to the first
message set as single, as the institutional agent simply broadcasts one message for the
entirety of the simulation: M_i(t) = (6), 1 ≤ t ≤ 100. The second set we will call split, as
the institution switches from messages of M_i(t) = (6), 1 ≤ t ≤ 50, to
M_i(t) = (0), 51 ≤ t ≤ 100, halfway through the simulation. We call the final set gradual
because the institution starts out spreading messages of M_i(t) = (6), but at every
interval of ten time steps, switches to M_i(t) = (5), M_i(t) = (4), etc., until finishing the
last 30 time steps by broadcasting M_i(t) = (0).
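These schedules are easy to express as functions of the time step. The sketch below is one plausible reading (the exact switch points for gradual are our assumption); each function returns the message tuple M_i(t) for a given t.

```python
def single(t: int) -> tuple:
    """Broadcast strength 6 for the whole 100-step run."""
    return (6,)

def split(t: int) -> tuple:
    """Strength 6 for the first half of the run, then strength 0."""
    return (6,) if t <= 50 else (0,)

def gradual(t: int) -> tuple:
    # Drop one belief level at every ten-step interval: 6 for t in 1..10,
    # 5 for 11..20, and so on, holding 0 for the remainder of the run.
    return (max(0, 6 - (t - 1) // 10),)

print(gradual(1), gradual(15), gradual(100))  # (6,) (5,) (0,)
```

Writing the conditions as pure functions of t keeps them interchangeable: the same simulation loop can be run under any schedule by swapping the function passed to the institutional agent.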
These specific sets of messages were chosen to expose distinct effects given each set,
specifically with cognitive contagion in mind. Based on research about how
identity-related beliefs update, a proper model would not allow agents with a belief
strength significantly different from the message to update their belief. With the single
message set, we wish to provide the simplest case: only one belief strength message is
being spread. We hypothesize that simple contagion will simply spread the messages to
all agents, regardless of prior belief strength, and all agents will eventually update their
belief strength to that in the message. We also anticipate that complex contagion, being
so reliant on prior belief strengths of an agent’s neighbors, will not be so
straightforward. Assuming a nearly uniform distribution of agent neighbor belief
strengths, the threshold chosen would likely make the difference between all agents
updating their beliefs, or no agents updating their beliefs. A proper cognitive contagion
1https://github.com/RickNabb/cognitive-contagion
Fig 5. A visual depiction of the different message set conditions used in our
experiments: single (top), split (middle), and gradual (bottom), set against a 100-step
simulation from t = 0 to t = 100.
model that captures our desired effect should see only belief updates from agents whose
belief strengths are already close to the belief strength encoded in the message.
The split message set, on the other hand, should have different effects. We
hypothesize that simple contagion will see all agents believe the first belief, then all
agents believe the second, while complex contagion may spread the initial belief but not
the second. For cognitive contagion, if the function we choose successfully models our
target phenomena, only agents within the same threshold as in the single condition
should believe the first message, then virtually no agents should believe the second –
except a few who may switch based on exposure effects.
Finally, we hypothesize that the gradual message set should be the only one that allows
cognitive contagion to sway the entire network. Because agents will only believe
messages that are relatively close to their prior beliefs, it logically follows that the only
way to move beliefs from one pole to another is incrementally. We further hypothesize
that simple contagion will sway the entire agent population to adopt the belief strength
of each message in turn, and that complex contagion may sway the entire population to
one specific belief strength, but then not be able to change any agent belief strengths
after such a contagion.
We also keep certain contagion variables static between experiments and conditions.
In each case, we will fix the simple contagion probability, p, to be 0.15, and fix α, the
complex contagion neighbor threshold, to be 0.35. The former was chosen to allow a
slower spread of belief strengths, as higher values would make the spread too fast to
properly analyze. The latter was chosen as it, too, would ideally avoid being so low that
contagion happens immediately and in all cases, or so high that it never occurs. In
preliminary experiments, the chosen values best satisfied these goals.
As a final experimental condition to vary independently of message set, we will run
experiments on a host of different graph topologies, keeping the number of nodes and
prior distribution of belief strengths constant. We test each contagion model on four
network topologies:
•The Erdős-Rényi (ER) random graph [63]
•The Watts-Strogatz (WS) small world network [64]
•The Barabási-Albert (BA) preferential attachment network [65]
•The Multiplicative Attribute Graph (MAG) [66]
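The first three of these topologies ship as generators in standard libraries such as NetworkX (`erdos_renyi_graph`, `watts_strogatz_graph`, `barabasi_albert_graph`); the MAG typically requires a custom implementation. As a dependency-free illustration of just the ER model (parameters here are placeholders, not the paper's settings):

```python
import random

def erdos_renyi(n: int, p: float, rng: random.Random) -> dict[int, set[int]]:
    """G(n, p): include each of the n*(n-1)/2 possible undirected edges
    independently with probability p, as an adjacency-set representation."""
    adj = {u: set() for u in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

g = erdos_renyi(50, 0.1, random.Random(42))  # 50 agents, edge probability 0.1
```

Holding the node count and prior belief distribution fixed while swapping the generator, as the experiments do, isolates the effect of structure alone on contagion.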
We explicate the rationale behind choosing each below. Running experiments across
different graph topologies should allow us to determine how much graph structure affects
contagion. We hypothesize that simple contagion may not vary much over graph type,
that complex contagion will vary the most because it is the most dependent on
neighborhood structure, and that cognitive contagion should vary least, as it relies more
on single instances of neighbor belief strengths rather than an aggregate. Additionally,
for each graph and contagion type, experiments are run ten times and results are shown
as averages over the total simulations, with variances displayed in supplemental
materials (S1-S2) where applicable. Ten simulations appeared adequate in preliminary
experiments because variance was fairly low, and did not seem to require hundreds or
more simulations.
But first, we need to choose a cognitive contagion function that best suits our
empirically-based goals. After choosing such a function, it will be compared against
simple and complex contagion in each condition.
Motivating Choice of β Functions
Unsurprisingly, depending on the choice of β, the network should display significantly
different dynamics. That choice should depend on what type of phenomenon is being
modeled. We will explore a variety of choices for this function and compare the outcome
of their respective cognitive contagion against the qualitative target phenomena.
For our specific purposes, we want to tune the choice of function to capture the
effect of updating beliefs core to an individual’s identity: in a manner where incoming
belief strengths must be close to existing belief strengths to yield an update. If possible,
it would also be useful to be able to choose a function which also captures aspects of
exposure effects: that beliefs do have a small chance to update that grows as the agent
is exposed to a belief over time. However, the dissonance effect should take precedence:
agents should only experience an exposure effect if incoming messages are already
somewhat close to their existing beliefs.
Keeping these two target phenomena in mind, we explore three classes of functions:
linear, threshold, and logistic.
Linear Functions
To begin with perhaps the simplest function, an inverse linear function would capture
the effect of making beliefs that are further apart have a smaller probability of updating.
Moreover, we can generalize this inverse function by adding some parameters to add a
bias and scalar to the denominator. The equation comes out as follows:
P(b_u = b | b_u) = β(b_u, b) = 1 / (γ + α|b_u − b|)    (6)
In this equation, γ becomes a parameter to add bias towards being reluctant to
update beliefs, and α similarly decreases the probability of update as it increases. This
turns out to be a useful formulation, because if we set γ and α to be very low, then the
agent becomes relatively more “gullible.” Conversely, setting γ and α to be high would
make the agent “stubborn.” We hypothesize that the “stubborn” agent will be most
desirable for our purposes of modeling cognitive dissonance.
To compare parameterizations of the inverse linear function, we contrast, on an
Erdős-Rényi random graph G(N, ρ) = (V, E) with N = 250, ρ = 0.05, in our POD
model setup described above, a relatively “gullible” agent function (γ = 1, α = 0) to a
“normal” function (γ = 1, α = 1), and to a “stubborn” function (γ = 10, α = 20). We
additionally display results for only the split message set, though results for others can
be found in the supplemental materials (S3-S8). Results are displayed in Fig. 6.
Fig 6. The split message set on an ER random graph, N = 250, ρ = 0.05, for agents
updating their beliefs based on the inverse linear cognitive contagion function in Eq. (6).
Graphs display the percent of agents who believe B with strength b over time. The left
graph shows agents parameterized to be “gullible” (γ = 1, α = 0); the middle shows
“normal” agents (γ = 1, α = 1); and the right, “stubborn” agents (γ = 10, α = 20).
As expected, the results show that the “gullible” agents simply believe everything.
The “normal” agents take a bit longer to all update their belief strengths to that of the
messages broadcast, but eventually do. Importantly, when all agents’ belief strength for
B is b = 6 after the first 50 time steps, they all quickly switch over to b = 0, which does
not fit the cognitive dissonance effect we are trying to model. The “stubborn” agent
case is the closest to what we are seeking. Only the agents who are already closest to
b = 6 believe the first message over time, with some effect on agents with more distant
beliefs. After the messages switch polarity halfway through, the agents whose strength
of belief is b = 6 do drop significantly, but less so than in the other conditions. This
effect seems closer to the mix of dissonance and exposure effects that we desire: beliefs
are less likely to change the further away they are, but there is a chance to change with
many messages over time.
Threshold Functions
Next, we evaluate the behavior of threshold functions and compare to our desired effect.
Threshold update functions are used in opinion diffusion ABMs, most notably in the
HK bounded confidence model [67–69]. We anticipate that these functions will capture
the desired dissonance effect. The function can be parameterized as we already
motivated in Eq. (4), with γ serving as the threshold.
Using the same formulations as above, with “gullible” (γ = 6), “normal” (γ = 3),
and “stubborn” (γ = 1) agents, we find results on the same graph structure as displayed
in Fig. 7.
These results confirm our expectations, and perfectly capture the effect of only
updating if incoming messages are within a certain distance of existing beliefs. However,
the threshold function leaves no possibility of update that could capture the repetition
effects of mere-exposure or illusory truth. The probability of updating is either 0 or 1,
which loses much of the nuance of the actual phenomena.
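The threshold rule can be sketched as follows, assuming the parameterization of Eq. (4) described above, with γ as the maximum belief distance that still yields an update:

```python
def beta_threshold(b_u, b, gamma):
    """Threshold update probability (per Eq. (4)): update with certainty
    iff the incoming strength is within gamma of the prior, else never."""
    return 1.0 if abs(b_u - b) <= gamma else 0.0
```

The all-or-nothing output is exactly the limitation noted above: there is no small residual probability through which repeated exposure could ever move a distant agent.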
Fig 7. The split message set on an ER random graph, N = 250, ρ = 0.05, for agents
updating their beliefs based on the threshold cognitive contagion function in Eq. (4).
Graphs display the percent of agents who believe B with strength b over time. The left
graph shows agents parameterized to be “gullible” (γ = 6); the middle shows “normal”
agents (γ = 3); and the right, “stubborn” agents (γ = 1).
Sigmoid Functions
Finally, we will test a logistic function – specifically a sigmoid function, as we are
attempting to capture probabilities and the range of the sigmoid is [0, 1]. Sigmoid
functions are commonly used in neural network models because they capture
“activation” effects arguably akin to action potentials in biological neurons, and rein in
outputs so they do not explode while learning [70]. This property is useful for our
purposes as well. We formulate our sigmoid cognitive contagion function as follows:
β(b_u, b) = 1 / (1 + e^(α(|b_u − b| − γ)))    (7)
In this equation, α and γ control the strictness and threshold, respectively. As α
increases, the function looks more like a binary threshold function, and restricts any
significant probability to center around γ − 1. γ controls the threshold value by
translating the function on the x axis. Though, in a sigmoid function, the value at
x = γ will always be 0.5, so if one wishes to guarantee belief update given a threshold τ,
it must be set as τ = γ + ε, where ε ∝ 1/α. We use this strategy to set our γ values
throughout our experiments.
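Eq. (7) and the roles of α and γ can be sketched as follows (the parameter dictionaries are the agent types used below):

```python
import math

def beta_sigmoid(b_u, b, alpha, gamma):
    """Sigmoid update probability from Eq. (7):
    P = 1 / (1 + exp(alpha * (|b_u - b| - gamma)))."""
    return 1.0 / (1.0 + math.exp(alpha * (abs(b_u - b) - gamma)))

# Agent types compared in the experiments below:
gullible = dict(alpha=1, gamma=7)
normal   = dict(alpha=2, gamma=3)
stubborn = dict(alpha=4, gamma=2)
```

At a belief distance equal to γ the probability is exactly 0.5; for the “stubborn” agent (α = 4, γ = 2), a distance of 1 already gives roughly 0.98, while a distance of 3 gives roughly 0.02.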
Given the same experiments as above, the sigmoid function parameterized for
different agent types (“gullible” (α = 1, γ = 7), “normal” (α = 2, γ = 3), and “stubborn”
(α = 4, γ = 2)) yields results as shown in Fig. 8.
Fig 8. The split message set on an ER random graph, N = 250, ρ = 0.05, for agents
updating their beliefs based on the sigmoid cognitive contagion function in Eq. (7).
Graphs display the percent of agents who believe B with strength b over time. The left
graph shows agents parameterized to be “gullible” (α = 1, γ = 7); the middle shows
“normal” agents (α = 2, γ = 3); and the right, “stubborn” agents (α = 4, γ = 2).
These results seem to display characteristics of both the inverse linear function and
the threshold function. The “gullible” agents, as always, believe everything, but with a
softer transition than for the linear or threshold functions. The “normal” agents are the
first indication that we are getting closer to our desired effect. After an initial
widespread uptake of belief strength b = 6 in the first half of the simulation, some
agents begin to believe b = 0, but the population that decreases the most to engender
that gain seems to be agents with belief strength b = 1. Though, from our model, some
agents with strength b = 6 must have believed b = 0 messages, because if they did not,
the messages would never have been shared and made it to b = 1 agents – the
institutional agents are only connected to b = 6 agents.
Finally, the “stubborn” agents seem to best capture our desired effect. Initially,
b = 6, b = 5, and b = 4 agents are the only ones who update beliefs. There is also a
small effect where some b = 3 agents update to b = 6, capturing the exposure effects
combined with a dissonance effect. Importantly, when messages switch to b = 0, none of
the b = 6 agents update their beliefs. The exposure effects would not work, as the
dissonance effect would take primacy.
Given these initial experiments, it seems most reasonable to choose the “stubborn”
sigmoid cognitive contagion function as that which best captures our desired effects. We
will use this defensive cognitive contagion (DCC) function in the rest of our experiments
as we compare the effects of cognitive contagion to those of simple and complex
contagion.
Comparing Contagion Models
Now that we have selected a cognitive contagion function that best captures the effects
we wish to model, it is necessary to compare results of this function to those from simple
and complex contagion. As a reminder, for simple contagion we are using an infection
probability of p = 0.15, and for complex contagion, a ratio of α = 0.35. We will
investigate the way that these different contagion models manifest effects on different
network structures. In addition to significant effects based on the choice of the β
function, we hypothesize that effects will also significantly differ based on the structure
of the network. The structure will determine which ideas reach agents, and which do
not, and thus should affect the final outcome of belief distribution over the network.
We will test each contagion model on four types of networks: the Erdős-Rényi (ER)
random graph [63], the Watts-Strogatz (WS) small world network [64], the
Barabási-Albert (BA) preferential attachment network [65], and the Multiplicative
Attribute Graph (MAG) [66]. Each network has distinct properties that will affect how
the contagions play out. Additionally, we will test each message set for each network
type to explore the effects of different influence strategies.
Contagion on ER Random Networks
We will begin with a contagion on an Erdős-Rényi [63] random network
G(N, ρ) = (V, E), where ρ = 0.05 and N = 500. Note that ρ is not the chance of simple
contagion, p, but the chance that two agents connect in the random graph. This graph
type was chosen as a baseline to compare others to, as is standard in network science.
Results of the simple and complex contagions are shown below in Fig. 9.
Results from the simple contagion experiments show that in each message set
condition, the belief strength being spread pervaded the entire network in all cases.
Moreover, the broadcast belief strengths were adopted by the population very quickly.
In the case of complex contagion, the initial distributions of belief strengths did not
change in any message set condition. This is likely because, in an ER random graph,
nodes are likely to have a uniform distribution of neighbor beliefs (which, for seven
values of B, would yield a ratio of 0.143 for each b), so the complex contagion threshold
of 0.35 could not be reached for any message’s b value.
Fig 9. Simple (top row) and complex (bottom row) contagion on a random network
with N = 500, and connection chance ρ = 0.05. Graphs show the percent of agents who
believe B with strength b over time.
Cognitive contagion on the ER graphs yielded markedly different results. As
depicted in Fig. 10, both the single and split message sets were only able to sway agents
that started with b = 6, b = 5, or b = 4, with what appears to be a few b = 3 agents
persuaded. Importantly, no agents were swayed after the messaging change in the split
condition. The gradual message set is the only one that was able to sway all agents over
to b = 0.
Fig 10. DCC on a random network with N = 500, and connection chance ρ = 0.05.
Graphs show the percent of agents who believe B with strength b over time.
Contagion on WS Small World Networks
Our second set of experiments was conducted on Watts-Strogatz [64] small world
networks, G(N, k, ρ) = (V, E), where N = 500, k = 5, and ρ = 0.5. In this formulation,
k is the number of initial neighbors any node is connected to, and ρ is the chance of
rewiring any edge. We chose this graph topology because small world networks exhibit
some attributes of real-world social networks (low diameter and triadic closure). Results
of simple and complex contagions on these WS graphs are shown in Fig. 11.
Simple contagion results showed largely the same pattern as in the ER random
graph, but a significantly slower spread through the population. This is likely due to
the fact that the WS graphs had a total of 2,500 edges, whereas the rounded expected
number of edges in the ER graphs was 6,238. Interestingly, complex contagion was
successful on the WS graphs, but with a high amount of variance over simulations
(variance shown in S1). With only 5 neighbors, the barrier to fulfilling the 0.35
threshold for belief is much easier to cross – requiring only 2 neighbors who match the
message’s b value. This likely led to contagion, whereas in the ER graph, with an
expected neighbor count of 25, the threshold requires 9 matching neighbors and is much
harder to satisfy.
Fig 11. Simple (top row) and complex (bottom row) contagion on a Watts-Strogatz
small world network with N = 500, initial neighbors k = 5, and rewiring chance ρ = 0.5.
Graphs show the percent of agents who believe B with strength b over time. Asterisks
(*) denote these contagions had significant variance over simulation iterations.
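The arithmetic behind this comparison is simple enough to check directly (the helper function is ours, for illustration):

```python
import math

def neighbors_needed(degree, threshold=0.35):
    """Smallest number of same-belief neighbors that crosses the complex
    contagion ratio threshold for a node of the given degree."""
    return math.ceil(threshold * degree)

ws_needed = neighbors_needed(5)            # WS initial neighbors k = 5
er_needed = neighbors_needed(500 * 0.05)   # ER expected degree N * rho = 25
```

Two matching neighbors out of five is a far lower barrier than nine out of twenty-five, which is consistent with complex contagion succeeding on WS graphs but not on ER graphs.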
Results from cognitive contagion experiments closely match those from the ER
random graph experiments. These are displayed in Fig. 12. The population displayed
the same patterns given a significantly different graph structure, which calls for
explanation. We will discuss this further below.
Fig 12. DCC on a Watts-Strogatz small world network with N = 500, initial neighbors
k = 5, and rewiring chance ρ = 0.5. Graphs show the percent of agents who believe B
with strength b over time.
Contagion on BA Preferential Attachment Networks
We continued our experiments by testing Barabási-Albert [65] preferential attachment
networks, G(N, m) = (V, E), where N = 500 and m = 3. Here, m represents the number
of edges added with each newly added node. This network type was also chosen because
of its properties that closely resemble real-world social networks (low diameter, power
law degree distribution). Results are shown in Fig. 13.
Again, simple contagion results sit between the earlier cases: slower contagion than
in ER random graphs, but faster than in WS graphs. This is interesting because the
number of edges in each BA graph would be 1,500 – clearly fewer than in our WS
graphs – yet spread was faster. In this case, spread is facilitated by the power law
distribution of node degree: even a few high-degree nodes believing the message can
have an outsize effect in spreading it quickly to the outskirts of the network [71]. Also
interestingly, complex contagion appears to have slight effects on these graphs, with the
gradual message set having the most effect. Complex contagion results also showed
significant variance (shown in S2). Perhaps the power law degree distribution also
created nodes able to be influenced with fewer edges than in the ER graph. But since
node degree was more distributed than in a WS graph, it logically follows that complex
contagion was not as ubiquitous for all nodes.
Fig 13. Simple (top row) and complex (bottom row) contagion on a Barabási-Albert
preferential attachment network with N = 500, and added edges m = 3. Graphs show
the percent of agents who believe B with strength b over time. Asterisks (*) denote
these contagions had significant variance over simulation iterations.
Again, results from cognitive contagion experiments were incredibly similar to those
of the ER random and WS graphs – most similar to results from the WS graph. Results
are shown in Fig. 14.
Fig 14. DCC on a Barabási-Albert preferential attachment network with N = 500, and
added edges m = 3. Graphs show the percent of agents who believe B with strength b
over time.
Contagion on MAG Networks
Finally, we tested contagion on the Multiplicative Attribute Graph [66] with an affinity
matrix Θ
b
that yielded a graph with very high homophily. We chose this graph topology
because real world social networks are homophilic – people who have similar interests
tend to connect. Testing a highly homophilic graph (higher than in a real social
network) can allow us to test the extreme case of communities in silos based on their
belief strength. The affinity matrix was constructed as follows:
Θb= (θij )∈Rm×n=1
1 + 50(j−i)2(8)
=
0.167 0.018 0.005 0.002 0.001 0.0008 0.0006
0.018 0.167 0.018 0.005 0.002 0.001 0.0008
0.005 0.018 0.167 0.018 0.005 0.002 0.001
0.002 0.005 0.018 0.167 0.018 0.005 0.002
0.001 0.002 0.005 0.018 0.167 0.018 0.005
0.0008 0.001 0.002 0.005 0.018 0.167 0.018
0.0006 0.0008 0.001 0.002 0.005 0.018 0.167
(9)
To measure homophily, we used a simple measure of the global average neighbor
distance given the b value of each node, and compared against a random ER graph. The
measure is detailed in Eq. (10):
h(G = (V, E)) = ( Σ_{v∈V} Σ_{u∈N(v)} |b_u − b_v| ) / (2|V|^2),    (10)
where N(v) is a function that returns the neighbors u ∈ V of v. Over ten ER random
graphs with N = 500 and ρ = 0.05, the mean average neighbor distance was 2.30 with a
mean variance of 0.349. Over ten MAG graphs generated with Θ_b, the mean average
neighbor distance was 0.31 with a mean variance of 0.02. Results from simple and
complex contagion on these homophilic MAG graphs are shown in Fig. 15.
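Eq. (10) translates directly into code; a sketch over an adjacency-list representation (names are ours, for illustration):

```python
def homophily(adj, beliefs):
    """Global average neighbor belief distance, per Eq. (10):
    the sum over each node v and neighbor u of |b_u - b_v|,
    divided by 2|V|^2. `adj` maps each node to its neighbor list."""
    total = sum(abs(beliefs[u] - beliefs[v])
                for v in adj for u in adj[v])
    return total / (2 * len(adj) ** 2)
```

Lower values indicate neighbors hold closer belief strengths; the MAG graphs above score 0.31 against 2.30 for comparable ER graphs.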
Fig 15. Simple (top row) and complex (bottom row) contagion on a homophilic MAG
network with N = 500, and Θ_b detailed in Eq. (9). Graphs show the percent of agents
who believe B with strength b over time.
As it turns out, a high degree of homophily did not appear to make a significant
difference to any contagion pattern. The patterns generated from simple and complex
contagion appear incredibly similar to those from the ER random graphs. The same is
true for cognitive contagion results, which are depicted in Fig. 16. This makes sense
for simple contagion, because as long as the expected number of neighbors for
each node is similar to that of our ER graphs, the belief strengths of neighbors make no
difference. For complex contagion, it also makes sense that no agents seemed to update
their belief strength, because they are surrounded almost entirely by other agents of the
same belief strength – meeting the 0.35 threshold for any message would be infeasible.
But the result for cognitive contagion may be surprising, as it still resembles the same
pattern seen in each prior experiment, with only slight variation.
Fig 16. DCC on a homophilic MAG network with N = 500, and Θ_b detailed in Eq. (9).
Graphs show the percent of agents who believe B with strength b over time.
Analysis of Results
Results across different graph topologies further support our motivations for introducing
cognitive contagion models. Across Erdős-Rényi random graphs [63], Watts-Strogatz
small world networks [64], Barabási-Albert preferential attachment networks [65], and
highly homophilic Multiplicative Attribute Graphs [66], simple and complex contagions
had varying results. Given the highly varied graph structures, it makes sense that
contagion models which rely heavily on structure yield differing results.
Our results from the DCC cognitive contagion model do not show such variation. In
fact, across all graph types tested, the cognitive contagion results do not appear to
differ significantly. These results are encouraging because our model, by design, removes
the structural dependencies of complex contagion, and replaces simple contagion’s
random chance of spread with one motivated by what agents already believe.
Thus the key factors in our cognitive contagion model are:
(1) Whether or not any given agent is exposed to some message;
(2) How many times an agent is exposed to similar messages; and
(3) The difference between agent beliefs and that message.
These results qualitatively match what has been observed in misinformation
literature. Even when exposed to factual or scientific evidence (e.g. that wearing masks
would mitigate the COVID-19 pandemic), people who are already skeptical of
mask-wearing are not able to be swayed. They often instead rationalize their existing
beliefs [2, 21, 28,58]. Additionally, mass-exposure to a given message still has a chance to
sway agents in our model – proportionally to how distant that message is to agent
beliefs. This captures the illusory truth [22, 51] and mere-exposure effects [59].
We can also quantify this result. For any message m_j from a list
M_{i:t} → (m_0, m_1, ..., m_j), m_j ∈ M, j ≥ 0, sent by an institution i at time step t,
there will be a probability that any agent u will believe the message, which depends on
the factors listed above. In terms of our model,
(1) becomes the probability of m_j being received and believed by any neighbor N(u)
of u.
To travel from institution i to agent u, a message must follow a directed path
through the graph, w_iu = (v_1, v_2, ..., v_n), where v_1 = i and v_n = u. The
probability of a message being passed down the entire path can be expressed as:
P(m_j, w) = Π_{v∈w} β(v, m_j).    (11)
To then properly represent (1), we can limit w_iu to end at neighbors of u.
The next step requires us to determine how many neighbors N(u) of u believe m_j –
as they would then subsequently propagate the message to u. Therefore,
(2) becomes |N_β(i, u, m_j)|, the number of neighbors of u who are likely to believe
m_j coming from i.
However, we do not know ahead of time which neighbors will actually believe m_j, as
the model is stochastic. We can argue that a probability within some δ will suffice. We
can represent N_β(i, u, m_j) as:
N_β(i, u, m_j) = {v | v ∈ N(u) ∧ P(w_iu, m_j) ≥ 1 − δ},    (12)
Because in the POD model agents can only believe and share m_j once, to determine
how many neighbors of u would share the message, we must choose a set of
non-overlapping paths W* from i to neighbors of u – moreover, one that maximizes
total path probabilities:
W*_iu = argmax_W Π_{w∈W} P(m_j, w), over sets W of pairwise non-overlapping paths w_iu    (13)
The algorithmic formulation of such a process would best be captured in future work.
Regardless of method, it stands that P(m_j, w) is crucial in determining whether
any agent u will have a chance of receiving a message and believing it. This result can
help explain why simple and complex contagion showed such variation across graph
types, but cognitive contagion did not. Given the probabilities of adopting belief
strength b_u given a prior belief of b_v for DCC shown in Table 1, it becomes clear that
if any β(v, m_j) is given a belief strength difference of 3 or higher, then the entire chain’s
probability P(m_j, w) will collapse to very close to 0.
b_u \ b_v    0       1       2       3       4       5       6
    0      0.999   0.982   0.500   0.018  <0.001  <0.001  <0.001
    1      0.982   0.999   0.982   0.500   0.018  <0.001  <0.001
    2      0.500   0.982   0.999   0.982   0.500   0.018  <0.001
    3      0.018   0.500   0.982   0.999   0.982   0.500   0.018
    4     <0.001   0.018   0.500   0.982   0.999   0.982   0.500
    5     <0.001  <0.001   0.018   0.500   0.982   0.999   0.982
    6     <0.001  <0.001  <0.001   0.018   0.500   0.982   0.999
Table 1. Probabilities given by β(b_u, b_v) for the DCC function, described in Eq. (5).
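Table 1 can be regenerated from the “stubborn” sigmoid parameterization (α = 4, γ = 2) selected above; a quick verification sketch, assuming that parameterization is the DCC function referenced as Eq. (5):

```python
import math

def dcc(b_u, b_v, alpha=4, gamma=2):
    """DCC update probability under the 'stubborn' sigmoid parameterization."""
    return 1.0 / (1.0 + math.exp(alpha * (abs(b_u - b_v) - gamma)))

# Rebuild Table 1 over the seven belief strengths (rows b_u, columns b_v).
table = [[dcc(b_u, b_v) for b_v in range(7)] for b_u in range(7)]
```

Distance 0 gives roughly 0.9997 (printed as 0.999), distance 2 gives exactly 0.5, and distance 4 or more falls below 0.001.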
To satisfy (1), a path of agents with belief strengths at most distance 1 away from
the message would reliably transmit it with high probability. If any agents in the path
have a belief of distance 2 from the message, the transmission probability would
decrease, halving with each such agent. Compare this with simple contagion, where
every agent has a flat 0.15 probability of sharing, and the path probability converges
close to 0 in only two steps; or complex contagion, where if any agent in the path does
not meet the threshold of 0.35, the path probability immediately collapses to 0.
Taking this criterion for (1) into account, Table 1 also makes clear that to satisfy (2)
and (3), the chain may only need to end with one agent – i.e. |W*_iu| = 1. If the
message belief strength is distance 0 or 1 from u, then there is already a near-guaranteed
chance that u will believe the message after receiving it only once. Conversely, in any
quantitative analysis of which agents may believe a message, we can exclude with high
confidence all agents with a belief strength difference of 3 or higher from consideration,
as their chances of believing the message even if it made it to them would be near zero.
Therefore, to quantitatively demonstrate why the DCC results were so stable across
random graph types, we can show, for a randomly selected agent u with belief strength
b_u, the percentage of 100 random generations of the graph which yield at least one
path of agents v entirely with b_v at most distance τ away from a message with belief
strength b_mj. Moreover, we can show this for all potential values of b_u, keeping the
belief strength of the message, b_mj, constant at 6 (quantifying the single message
condition). Importantly, this path does not include u, because for b_u = 3 and below,
the distance of b_u to b_mj would always be too high; thus, our paths lead to neighbors
of u. These paths were found by assigning edge weights equal to the distance between
the message and the belief strength of the source node in the directed pair, and running
Dijkstra’s weighted shortest path algorithm with i as the source and u as the target.
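For a yes/no answer, the weighted shortest-path procedure just described reduces to a reachability check restricted to agents within τ of the message; a sketch over adjacency lists (names are ours, and the institutional source is exempt from the τ check):

```python
from collections import deque

def tau_path_exists(adj, beliefs, source, target, b_msg, tau):
    """Does a path exist from `source` to some neighbor of `target`,
    passing only through agents whose belief strength is within `tau`
    of the message strength `b_msg`? (The path excludes `target`.)"""
    ok = lambda v: abs(beliefs[v] - b_msg) <= tau
    goal = {v for v in adj[target] if ok(v)}
    seen, queue = {source}, deque([source])
    while queue:
        v = queue.popleft()
        if v in goal:
            return True
        for u in adj[v]:
            if u not in seen and u != target and ok(u):
                seen.add(u)
                queue.append(u)
    return False
```

Running such a check over repeated graph generations yields proportions of the kind reported in Table 2.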
(τ = 1)  b_u = 0  b_u = 1  b_u = 2  b_u = 3  b_u = 4  b_u = 5  b_u = 6
ER        0.91     0.9      0.97     0.95     1.0      1.0      1.0
WS        0.51     0.56     0.59     0.7      0.64     0.62     1.0
BA        0.73     0.66     0.66     0.75     0.63     0.75     1.0
MAG       0.13     0.2      0.37     0.54     0.8      1.0      1.0
(τ = 2)  b_u = 0  b_u = 1  b_u = 2  b_u = 3  b_u = 4  b_u = 5  b_u = 6
ER        0.98     0.99     0.97     1.0      1.0      1.0      1.0
WS        0.65     0.8      0.76     0.72     0.76     0.81     1.0
BA        0.84     0.89     0.84     0.82     0.89     0.88     1.0
MAG       0.29     0.29     0.45     0.84     0.99     1.0      1.0
Table 2. Proportion of 100 random graphs with at least one path leading from the
institutional agent i to a randomly selected node u with belief strength b_u, where each
agent v in the path has belief strength |b_v − b_mj| ≤ τ, and b_mj = 6.
When τ is 1, the path yields an almost guaranteed probability of the message
reaching u. When τ equals 2, the path can yield a range of chances to reach u,
depending on how many distance-2 agents there are in the path, each inserting a factor
of 0.5 into the total product. However, compared to the path probability in simple or
complex contagion – where the former is a path entirely of probabilities of 0.15, and the
latter requires meeting a threshold of 0.35 for each agent in the path to even have a
non-zero chance – both path types yield significantly higher probabilities of messages
reaching target agents.
Moreover, the analysis shows that across graph types, where there are high
proportions of both path types present, there is a high likelihood for messages to reach
agents of all belief strengths. Particularly for Erdős-Rényi random graphs, both path
types are almost always present. Both Watts-Strogatz and Barabási-Albert networks
show lower, but still high, proportions of both path types being present. This likely
accounts for the slight variations in the cognitive contagion results displayed in the
graphs above. Predictably, homophilic MAG graphs show a decreasing likelihood of both
path types as the distance between b_u and b_mj increases. If any of these paths
yielded a message reaching agent u, then combined with the probabilities in Table 1, we
see that target agents with belief strength distance 2 or less from the message will likely
believe it, and update accordingly. This is exactly what we see qualitatively in the above
results: regardless of graph type, agents with belief strength 4 or higher quickly adopt
the stronger belief of 6 from the message, and agents with belief strength 3 eventually
update after enough messages reach them.
Discussion
From our experiments, it is clear that the three types of social contagion affect
populations differently given the same initial conditions and beliefs to spread. We were
able to show, as predicted, that simple and complex contagion methods change agent
belief strengths in a manner that does not depend on what agents previously believed.
In the single and split message set conditions, many agents’ belief strengths were
swayed from value to value regardless of their initial belief. Thus, these
contagion models do not capture the cognitive phenomena that motivated our
experiments.
On the other hand, our simple cognitive contagion model also performed as expected.
The results were fairly robust across several graph topologies. In the single and split
message set conditions, most agents following the DCC function did not change their
belief strength over time. This fits the underlying social theory because the messages
were too far from what agents initially believed, so not updating their beliefs accurately
models the defensive or entrenching effects observed when people are exposed to
identity-related beliefs that they do not agree with [22,28]. The only message set
condition that was able to sway the entire population in the cognitive contagion
condition was the gradual set.
The POD model with DCC also appears to capture population-level trends in
opinion data that originally motivated our study. Results match the partisan
polarization phenomena being observed [6–9, 26], as agents who only update their belief
strengths in this manner are highly unlikely to be swayed by belief strengths that are
too far from theirs. Once swayed in one direction or another, our agents could not
adopt significantly differing belief strengths without being nudged along.
Given these similarities in results, even our very simple model may be able to lend
insight into potential ways to deal with spread of conspiracy beliefs – though we in no
way mean for these to be taken as policy recommendations. Our analysis revealed that
the three most important factors in swaying any agent’s belief were (1) whether or not
an agent is exposed to a message, (2) number of exposures, and (3) the difference
between prior agent beliefs and those expressed in the message. Even if (1) and (2) are
met, as in some attempts to debunk misinformation [5], (3) would prevent staunch
conspiracy believers from changing beliefs if exposed to a contradictory message. Some
analyses attempt to focus on the network structure [2,15,72] – i.e. (1) and (2) – without
acknowledging that individual psychology is just as important – as in (3). Our model,
which captures both network effects and individual effects, therefore gives novel insights
into a more holistic intervention. Our analysis of results showed that for a highly
homophilic network (a trait present in real social networks), certain messages have a
slim chance of reaching those holding certain beliefs. Any intervention would need to take
this into account and consider which types of messages would be most likely to reach
particular populations.
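The interplay of the three factors can be sketched as a toy belief-update rule. This is a minimal illustration only: the logistic form, the `gamma` (steepness) and `tau` (tolerated distance) parameters, and their values are our own assumptions here, not the calibrated DCC function from the experiments.

```python
import math
import random

def dcc_accept_prob(agent_belief, message_belief, gamma=4.0, tau=1.0):
    """Chance that a message's belief strength is adopted.

    Decays logistically as the distance between the agent's current
    belief and the message grows -- factor (3). gamma and tau are
    illustrative, not fitted, values.
    """
    distance = abs(agent_belief - message_belief)
    return 1.0 / (1.0 + math.exp(gamma * (distance - tau)))

def update_belief(agent_belief, message_belief, rng=random):
    # Factors (1) and (2) live in the caller: whether, and how often,
    # this function is ever invoked for a given agent.
    if rng.random() < dcc_accept_prob(agent_belief, message_belief):
        return message_belief
    return agent_belief
```

On a 7-point scale (0–6), a message at distance 0 is adopted almost surely, while one at distance 6 is essentially never adopted – mirroring why repeated contradictory debunking messages alone fail against staunch believers.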
July 8, 2021 22/28
Moreover, on the individual level, an intervention under our model would have to
gradually nudge the agent away from an undesirable belief strength. This brings the
individual into the debate over interventions. The only message sets from our
experiments that successfully swayed all agents in the population were those that
gradually eased agents from one belief polarity to the other. It should be noted that
belief change tactics aimed at individuals' psychology are already widely considered and
used by private and public institutions to manipulate target populations' beliefs, though not
yet in the case of domestic misinformation [30,73,74]. The ethics of these interventions –
often targeted at anti-radicalization, voter manipulation, or behavior change for
economic gain – are clearly fraught. This calls for further analysis of the ethics of
intervention techniques, but such an endeavor is outside the scope of this report.
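A deterministic toy version of this dynamic shows why only gradual schedules succeed. The one-step adoption tolerance and the 7-point 0–6 scale below are illustrative assumptions, not the parameters used in our experiments.

```python
def run_schedule(initial_belief, messages):
    """Push one agent through a sequence of institutional messages.

    The agent adopts a message only if it lies within one step of its
    current belief -- a crude, deterministic stand-in for the DCC rule.
    """
    belief = initial_belief
    for msg in messages:
        if abs(belief - msg) <= 1:
            belief = msg
    return belief

# An abrupt campaign never bridges the gap: repeating "6" to a 0-believer.
abrupt = run_schedule(0, [6] * 10)
# A gradual campaign eases the agent across the scale one step at a time.
gradual = run_schedule(0, [1, 2, 3, 4, 5, 6])
```

Here `abrupt` remains 0 while `gradual` ends at 6: the same destination belief, reached only when each message stays within reach of the agent's last position.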
Limitations and Future Work
These results are in their early stages. While the motivation for models of cognitive
contagion exists, several steps must still be taken to flesh the model out further. For
instance, the social science literature on misinformation and identity-based reasoning
offers a great deal of evidence for belief effects that rely on in-/out-group
dynamics [14,21,57], trust in message sources [10], more nuanced belief
structures [28,75–77], and the effects of emotion on social contagion [14,29]. These
theories and findings could motivate agent cognitive models more complex than either
the simple sigmoid distance function or the single proposition used in our model.
Further, a more rigorous application of the model to empirical population-level belief
changes would help verify the legitimacy of the model's results. Using both real
network structures, such as snapshots of social media networks, and real institutional
message data, such as tweets or posts from notable figures, would be steps toward
this goal. While ABMs have been used to model the spread of misinformation [15], social
media messages [72,78], and value-laden topics [79], it appears that few have verified
outcomes against ground truth – likely, among other reasons, because such data are
difficult to obtain.
There is great promise for computational social scientific tools like ABM to leverage
computational power to tackle complex social problems – such as modeling
misinformation spread – that were previously limited to thought experiments and
small-scale studies. If that promise is to be fulfilled, however, greater pains must be
taken to motivate and empirically ground both individual agent models and global
network structures and population models. This work is a step toward establishing
fruitful collaboration between the computational modeling community and social
scientists in order to tackle one of the greatest political challenges of our time.
Conclusion
This paper lays out two major contributions: a cognitive contagion model for
identity-related belief spread, and a Public Opinion Diffusion (POD) model in which
external, institutional agents (modeling media companies) influence the beliefs of
agents in the network. The cognitive contagion model, by giving each individual agent a cognitive
model to direct belief update, allows a level of expressiveness above existing simple and
complex contagion models. After proposing the cognitive contagion model, we compared
potential contagion functions to arrive at one capturing misinformation spread – what
we called a Defensive Cognitive Contagion (DCC) function – which adequately captured
the cognitive dissonance and exposure effects referenced in the empirical literature. This
allowed us to run simulations of networked populations of agents whose belief
strength in a given proposition is influenced by an external agent. Across several graph
topologies, keeping the POD model consistent, we compared simulation results for
simple contagion, complex contagion, and our DCC function. Analysis of these results
revealed that our cognitive contagion model is much less sensitive to graph topology
than either other contagion method. It showed that the crucial factor in belief change
was not only who surrounds a given agent, but also the content of any message and its
relation to the agent's prior belief. We concluded by motivating potential interventions
to correct misinformation and conspiracy beliefs that address the individual and the
network holistically, rather than only the network an individual is embedded in.
Acknowledgements
We thank the Tufts Data Intensive Studies Center (DISC) for the seed grant funding
that supported this research. LC was additionally supported by NSF grant 1934553.
References
1. Bursztyn L, Rao A, Roth C, Yanagizawa-Drott D. Misinformation during a
pandemic. University of Chicago, Becker Friedman Institute for Economics
Working Paper. 2020;(2020-44).
2. Del Vicario M, Bessi A, Zollo F, Petroni F, Scala A, Caldarelli G, et al. The
spreading of misinformation online. Proceedings of the National Academy of
Sciences. 2016;113(3):554–559.
3. Uscinski JE, Enders AM, Klofstad C, Seelig M, Funchion J, Everett C, et al.
Why do people believe COVID-19 conspiracy theories? Harvard Kennedy School
Misinformation Review. 2020;1(3).
4. Kouzy R, Abi Jaoude J, Kraitem A, El Alam MB, Karam B, Adib E, et al.
Coronavirus goes viral: quantifying the COVID-19 misinformation epidemic on
Twitter. Cureus. 2020;12(3).
5. Brennen JS, Simon F, Howard PN, Nielsen RK. Types, sources, and claims of
Covid-19 misinformation. Reuters Institute. 2020;7:3–1.
6. Schaeffer K. A look at the Americans who believe there is some truth to the
conspiracy theory that COVID-19 was planned; 2020. Available from:
https://www.pewresearch.org/fact-tank/2020/07/24/a-look-at-the-americans-who-believe-there-is-some-truth-to-the-conspiracy-theory-that-covid-19-was-planned/.
7. Clinton J, Cohen J, Lapinski JS, Trussler M. Partisan Pandemic: How
Partisanship and Public Health Concerns Affect Individuals’ Social Distancing
During COVID-19. Available at SSRN 3633934. 2020;.
8. Bakshy E, Messing S, Adamic LA. Exposure to ideologically diverse news and
opinion on Facebook. Science. 2015;348(6239):1130–1132.
9. Conover MD, Ratkiewicz J, Francisco M, Gonçalves B, Menczer F, Flammini A.
Political polarization on twitter. In: Fifth international AAAI conference on
weblogs and social media; 2011.
10. Swire-Thompson B, Lazer D. Public health and online misinformation: challenges
and recommendations. Annual Review of Public Health. 2020;41:433–451.
11. Jurkowitz M, Mitchell A. Fewer Americans now say media exaggerated
COVID-19 risks, but big partisan gaps persist; 2020. Available from:
https://www.journalism.org/2020/05/06/fewer-americans-now-say-media-exaggerated-covid-19-risks-but-big-partisan-gaps-persist/.
12. Bago B, Rand DG, Pennycook G. Fake news, fast and slow: Deliberation reduces
belief in false (but not true) news headlines. Journal of Experimental Psychology:
General. 2020;.
13. Pennycook G, Rand DG. Lazy, not biased: Susceptibility to partisan fake news is
better explained by lack of reasoning than by motivated reasoning. Cognition.
2019;188:39–50.
14. Van Bavel JJ, Baicker K, Boggio PS, Capraro V, Cichocka A, Cikara M, et al.
Using social and behavioural science to support COVID-19 pandemic response.
Nature Human Behaviour. 2020; p. 1–12.
15. Brainard J, Hunter P, Hall IR. An agent-based model about the effects of fake
news on a norovirus outbreak. Revue d'Épidémiologie et de Santé Publique. 2020;.
16. Kopp C, Korb KB, Mills BI. Information-theoretic models of deception:
Modelling cooperation and diffusion in populations exposed to "fake news". PloS
one. 2018;13(11).
17. Ehsanfar A, Mansouri M. Incentivizing the dissemination of truth versus fake
news in social networks. In: 2017 12th System of Systems Engineering
Conference (SoSE). IEEE; 2017. p. 1–6.
18. Maghool S, Maleki-Jirsaraei N, Cremonini M. The coevolution of contagion and
behavior with increasing and decreasing awareness. PloS one.
2019;14(12):e0225447.
19. Festinger L. A theory of cognitive dissonance. vol. 2. Stanford university press;
1957.
20. Pennycook G, McPhetres J, Zhang Y, Lu JG, Rand DG. Fighting COVID-19
misinformation on social media: Experimental evidence for a scalable
accuracy-nudge intervention. Psychological science. 2020;31(7):770–780.
21. Van Bavel JJ, Pereira A. The partisan brain: An identity-based model of
political belief. Trends in cognitive sciences. 2018;22(3):213–224.
22. Swire-Thompson B, DeGutis J, Lazer D. Searching for the backfire effect:
Measurement and design considerations. 2020;.
23. Christakis NA, Fowler JH. Social contagion theory: examining dynamic social
networks and human behavior. Statistics in medicine. 2013;32(4):556–577.
24. Centola D, Macy M. Complex contagions and the weakness of long ties.
American journal of Sociology. 2007;113(3):702–734.
25. Mitchell A, Jurkowitz M, Oliphant JB, Shearer E. Three Months In, Many
Americans See Exaggeration, Conspiracy Theories and Partisanship in COVID-19
News; 2020. Available from:
https://www.journalism.org/2020/06/29/three-months-in-many-americans-see-exaggeration-conspiracy-theories-and-partisanship-in-covid-19-news/.
26. Gramlich J. 20 striking findings from 2020; 2020. Available from:
https://www.pewresearch.org/fact-tank/2020/12/11/20-striking-findings-from-2020/.
27. Shannon J. ’It’s not real’: In South Dakota, which has shunned masks and other
COVID rules, some people die in denial, nurse says. USA Today;.
28. Porot N, Mandelbaum E. The science of belief: A progress report. Wiley
Interdisciplinary Reviews: Cognitive Science. 2020; p. e1539.
29. Brady WJ, Crockett M, Van Bavel JJ. The MAD model of moral contagion: The
role of motivation, attention, and design in the spread of moralized content online.
Perspectives on Psychological Science. 2020;15(4):978–1010.
30. Wiley C. Mindf*ck: Cambridge Analytica and the Plot to Break America.
Random House/Penguin Random House LLC; 2019.
31. Lippmann W. Public opinion. vol. 1. Transaction Publishers; 1946.
32. Bernays EL. Propaganda. Ig publishing; 2005.
33. Kramer AD, Guillory JE, Hancock JT. Experimental evidence of massive-scale
emotional contagion through social networks. Proceedings of the National
Academy of Sciences. 2014;111(24):8788–8790.
34. Fowler JH, Christakis NA. Cooperative behavior cascades in human social
networks. Proceedings of the National Academy of Sciences.
2010;107(12):5334–5338.
35. Adamic LA, Lento TM, Adar E, Ng PC. Information evolution in social networks.
In: Proceedings of the ninth ACM international conference on web search and
data mining; 2016. p. 473–482.
36. Kearney MD, Chiang SC, Massey PM. The Twitter origins and evolution of the
COVID-19 “plandemic” conspiracy theory. Harvard Kennedy School
Misinformation Review. 2020;1(3).
37. Pfeffer J, Malik MM. Simulating the dynamics of socio-economic systems. In:
Networked Governance. Springer; 2017. p. 143–161.
38. Reynolds CW. Flocks, herds and schools: A distributed behavioral model. In:
Proceedings of the 14th annual conference on Computer graphics and interactive
techniques; 1987. p. 25–34.
39. Scheutz M. Artificial Life Simulations: Discovering and Developing Agent-Based
Models. In: Model-Based Approaches to Learning. Brill Sense; 2009. p. 261–292.
40. Scheutz M, Schermerhorn P, Connaughton R, Dingler A. Swages-an extendable
distributed experimentation system for large-scale agent-based alife simulations.
Proceedings of Artificial Life X. 2006; p. 412–419.
41. Ferreira GB, Scheutz M. Accidental encounters: can accidents be adaptive?
Adaptive Behavior. 2018;26(6):285–307.
42. Ferreira GB, Scheutz M, Levin M. Modeling Cell Migration in a Simulated
Bioelectrical Signaling Network for Anatomical Regeneration. In: Artificial Life
Conference Proceedings. MIT Press; 2018. p. 194–201.
43. Schelling TC. Dynamic models of segregation. Journal of Mathematical Sociology.
1971;1(2):143–186.
44. Goffman W, Newill VA. Generalization of epidemic theory: An application to the
transmission of ideas. Nature. 1964;204(4955):225–228.
45. Centola D, Willer R, Macy M. The emperor’s dilemma: A computational model
of self-enforcing norms. American Journal of Sociology. 2005;110(4):1009–1040.
46. Centola D, Eguíluz VM, Macy MW. Cascade dynamics of complex propagation.
Physica A: Statistical Mechanics and its Applications. 2007;374(1):449–456.
47. Zhang H, Vorobeychik Y. Empirically grounded agent-based models of innovation
diffusion: a critical review. Artificial Intelligence Review. 2019;52(1):707–741.
48. Jackson JC, Rand D, Lewis K, Norton MI, Gray K. Agent-based modeling: A
guide for social psychologists. Social Psychological and Personality Science.
2017;8(4):387–395.
49. Bonabeau E. Agent-based modeling: Methods and techniques for simulating
human systems. Proceedings of the national academy of sciences. 2002;99(suppl
3):7280–7287.
50. Granovetter M. Threshold models of collective behavior. American journal of
sociology. 1978;83(6):1420–1443.
51. Begg IM, Anas A, Farinacci S. Dissociation of processes in belief: Source
recollection, statement familiarity, and the illusion of truth. Journal of
Experimental Psychology: General. 1992;121(4):446.
52. Epstein B. Agent-based modeling and the fallacies of individualism. Models,
simulations, and representations. 2012;9:115–144.
53. Kiesling E, Günther M, Stummer C, Wakolbinger LM. Agent-based simulation of
innovation diffusion: a review. Central European Journal of Operations Research.
2012;20(2):183–230.
54. Ryan B, Gross NC. The diffusion of hybrid seed corn in two Iowa communities.
Rural sociology. 1943;8(1):15.
55. Guilbeault D, Becker J, Centola D. Complex contagions: A decade in review. In:
Complex spreading phenomena in social systems. Springer; 2018. p. 3–25.
56. Cox III EP. The optimal number of response alternatives for a scale: A review.
Journal of marketing research. 1980;17(4):407–422.
57. Pereira A, Van Bavel J. Identity concerns drive belief in fake news. 2018;.
58. Bail CA, Argyle LP, Brown TW, Bumpus JP, Chen H, Hunzaker MF, et al.
Exposure to opposing views on social media can increase political polarization.
Proceedings of the National Academy of Sciences. 2018;115(37):9216–9221.
59. Zajonc RB. Attitudinal effects of mere exposure. Journal of personality and
social psychology. 1968;9(2p2):1.
60. Talev M. Axios-Ipsos poll: The skeptics are growing; 2020. Available from:
https://www.axios.com/axios-ipsos-poll-gop-skeptics-growing-deaths-e6ad6be5-c78f-43bb-9230-c39a20c8beb5.html.
61. Gertz M. Six different polls show how Fox’s coronavirus coverage endangered its
viewers; 2020. Available from:
https://www.mediamatters.org/fox-news/six-different-polls-show-how-foxs-coronavirus-coverage-endangered-its-viewers.
62. Wilensky U. NetLogo itself; 1999. http://ccl.northwestern.edu/netlogo/.
63. Erdős P, Rényi A. On the evolution of random graphs. Publ Math Inst Hung
Acad Sci. 1960;5(1):17–60.
64. Watts DJ, Strogatz SH. Collective dynamics of ‘small-world’ networks. Nature.
1998;393(6684):440–442.
65. Barabási AL, Albert R. Emergence of scaling in random networks. Science.
1999;286(5439):509–512.
66. Kim M, Leskovec J. Modeling social networks with node attributes using the
multiplicative attribute graph model. arXiv preprint arXiv:1106.5053. 2011;.
67. Grim P, Singer D. Computational Philosophy. 2020;.
68. Li K, Liang H, Kou G, Dong Y. Opinion dynamics model based on the cognitive
dissonance: An agent-based simulation. Information Fusion. 2020;56:1–14.
69. Hegselmann R, Krause U, et al. Opinion dynamics and bounded confidence
models, analysis, and simulation. Journal of artificial societies and social
simulation. 2002;5(3).
70. Ding B, Qian H, Zhou J. Activation functions and their characteristics in deep
neural networks. In: 2018 Chinese Control And Decision Conference (CCDC).
IEEE; 2018. p. 1836–1841.
71. Ebrahimi R, Gao J, Ghasemiesfeh G, Schoenbeck G. How complex contagions
spread quickly in preferential attachment models and other time-evolving
networks. IEEE Transactions on Network Science and Engineering.
2017;4(4):201–214.
72. Ross B, Pilz L, Cabrera B, Brachten F, Neubaum G, Stieglitz S. Are social bots a
real threat? An agent-based model of the spiral of silence to analyse the impact
of manipulative actors in social networks. European Journal of Information
Systems. 2019;28(4):394–412.
73. Matz SC, Kosinski M, Nave G, Stillwell DJ. Psychological targeting as an
effective approach to digital mass persuasion. Proceedings of the national
academy of sciences. 2017;114(48):12714–12719.
74. Thaler RH, Sunstein CR. Nudge: Improving decisions about health, wealth, and
happiness. Penguin; 2009.
75. Jost JT, Glaser J, Kruglanski AW, Sulloway FJ. Political conservatism as
motivated social cognition. Psychological bulletin. 2003;129(3):339.
76. Jost JT, Federico CM, Napier JL. Political ideology: Its structure, functions, and
elective affinities. Annual review of psychology. 2009;60:307–337.
77. Jost JT, Amodio DM. Political ideology as motivated social cognition: Behavioral
and neuroscientific evidence. Motivation and Emotion. 2012;36(1):55–64.
78. Nasrinpour HR, Friesen MR, et al. An agent-based model of message propagation
in the Facebook electronic social network. arXiv preprint arXiv:1611.07454. 2016;.
79. Shugars S. Good Decisions or Bad Outcomes? A Model for Group Deliberation
on Value-Laden Topics. Communication Methods and Measures. 2020; p. 1–19.