Contents lists available at ScienceDirect
Cognition
journal homepage: www.elsevier.com/locate/cognit
Original Articles
Epistemic spillovers: Learning others’ political views reduces the ability to
assess and use their expertise in nonpolitical domains
Joseph Marks a,⁎,1, Eloise Copland a,1, Eleanor Loh a, Cass R. Sunstein b, Tali Sharot a,⁎
a Affective Brain Lab, Experimental Psychology, University College London, London, UK
b Harvard Law School, Harvard University, Cambridge, MA, USA
ARTICLE INFO
Keywords:
Information-seeking
Political homophily
Influence
ABSTRACT
On political questions, many people prefer to consult and learn from those whose political views are similar to their own, thus creating a risk of echo chambers or information cocoons. We test whether the tendency to prefer knowledge from the politically like-minded generalizes to domains that have nothing to do with politics, even when evidence indicates that politically like-minded people are less skilled in those domains than people with dissimilar political views. Participants had multiple opportunities to learn about others’ (1) political opinions and (2) ability to categorize geometric shapes. They then decided to whom to turn for advice when solving an incentivized shape categorization task. We find that participants falsely concluded that politically like-minded others were better at categorizing shapes and thus chose to hear from them. Participants were also more influenced by politically like-minded others, even when they had good reason not to be. These results replicate in two independent samples. The findings demonstrate that knowing about others’ political views interferes with the ability to learn about their competency in unrelated tasks, leading to suboptimal information-seeking decisions and errors in judgement. Our findings have implications for political polarization and social learning in the midst of political divisions.
1. Introduction
To make good choices, human beings turn to one another for information (Gino, Brooks, & Schweitzer, 2012; Hofmann, Lei, & Grant, 2009; Schrah, Dalal, & Sniezek, 2006; Yaniv & Kleinberger, 2000). When selecting a retirement plan or deciding whether to grab an umbrella on the way out, people are motivated to get information from the most accurate source. Obviously, people would rather receive a weather report from the forecaster whose predictions are 80% correct than from the one who is wrong every other day.
At the same time, people also prefer to receive information from others who are similar to themselves. Democrats are more likely to turn to CNN for their news and Republicans to Fox News for their daily updates (The Pew Research Center, 2009). This is partly because people assume that like-minded people are more likely to be correct – a phenomenon that can lead to echo chambers (Del Vicario et al., 2016; Sunstein, 2017). But if people had clear and repeated opportunities to learn who is right and who is wrong, would similarity interfere with the ability to learn about accuracy?
It has been suggested that people assess others’ expertise based on their own beliefs (Boorman, O’Doherty, Adolphs, & Rangel, 2013; Faraji-Rad, Warlop, & Samuelsen, 2012; Faraji-Rad, Samuelsen, & Warlop, 2015; Schilbach, Eickhoff, Schultze, Mojzisch, & Vogeley, 2013). In one study (Boorman et al., 2013) participants were asked to evaluate financial assets while also observing the judgments made by others before receiving feedback. The findings indicated that participants updated their beliefs about others’ expertise not only after receiving feedback about the asset’s value, but also before feedback was available. In particular, participants took into account their own judgment about the asset when updating their assessment of the other participant’s ability on the task. When the other person’s judgment was in accord with their own, they gave the other person credit, but they penalized that person when their judgments conflicted. In fact, subjects gave considerable credit to people for correct judgements with which they agreed, but barely gave them any credit at all for accurate judgments with which they disagreed. This bias interferes with the ability to assess others’ skills, leading individuals to conclude that people who think like them about a certain topic are more likely to be experts.
Our question, however, is whether similarity in one field will generalize to a biased assessment in another field – a kind of epistemic spillover. If we conclude that person X is good at finance simply because he tends to agree with us about the value of stocks, will we then be more likely to conclude he has superior abilities in predicting the weather? Because of the halo effect (Dion, Berscheid, & Walster, 1972; Nisbett & Wilson, 1977; Thorndike, 1920), which is the tendency for an evaluation in one area to influence an evaluation in another area, we predicted this to be the case. The likely downstream behavioural consequence is that people will turn to others who think like them in one area for information in another area, even in cases where the evidence in front of them clearly indicates that this is suboptimal.

https://doi.org/10.1016/j.cognition.2018.10.003
Received 13 April 2018; Received in revised form 4 October 2018; Accepted 5 October 2018
⁎ Corresponding authors.
1 These authors contributed equally.
E-mail addresses: joseph.marks.14@ucl.ac.uk (J. Marks), t.sharot@ucl.ac.uk (T. Sharot).
Cognition 188 (2019) 74–84
Available online 19 October 2018
0010-0277/ © 2018 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/BY/4.0/).
Here, we ask whether (dis)similarity in political views interferes with the ability to learn about another person’s competency in an unrelated task (specifically categorizing shapes) in a situation in which it is in people’s best interest to learn who excels in the task in order to turn to them for assistance. In the first part of our experiment, participants had an opportunity to learn (i) whether others had similar political opinions to theirs and (ii) how well they did in a task that required learning about shapes. After rating others on these two characteristics, they completed the second part of the experiment, where they decided to whom to turn for advice when solving the shape task. They were rewarded for accuracy on the task and thus had an economic incentive to turn to the participant who was most skilled at the task.
We find that (dis)similarity in political views interferes with the ability to make an accurate assessment of people’s expertise in the domain of shapes, which leads to two central outcomes. The first is that people chose to hear about shapes from others who are politically like-minded, even when those people are not especially good at the shape task, rather than to hear from people who excel at the shape task but have different political opinions. The second is that people are more influenced by those with similar political opinions, even when they had the opportunity to learn that those by whom they are influenced are not especially good at the task they are solving. The results replicate in two independent samples. We suspect that these findings can be found in the real world, and that they help explain a range of phenomena, including the spread of fake news (Friggeri, Adamic, Eckles, & Cheng, 2014; Kahne & Bowyer, 2017), conspiracy theories (Del Vicario et al., 2016), polarization (Druckman, Peterson, & Slothuus, 2013; Prior, 2007), and insufficient learning in general (Yaniv & Kleinberger, 2000; Yaniv & Milyavsky, 2007).
2. Experiment 1
2.1. Method
2.1.1. Participants
American residents over 18 years of age who speak English were
recruited on Amazon Mechanical Turk. All participants provided de-
mographic information (see supplementary materials). Sample size was
determined using a power analysis based on a pilot study.
154 participants completed the first part of the task (Learning Stage). Participants had to pass the learning stage test (see below) in order to continue to the choice stage. 97 participants (34 females and 63 males, aged 20–58 years, M = 34.81, SD = 9.59) passed the learning test. Participants who passed the learning test did not differ from those who failed on age, gender, ethnicity, language, education, income, subjective socio-economic position, political ideology, interest/involvement in US politics, or generalized trust (all P > .12).

Fig. 1. Task. During the Learning Stage participants learned about the political opinions of four sources (represented by an animal photo) and about the sources’ accuracy on a shape task (blap task). (a) Blap trials and (b) political trials were interleaved. (a) On each blap trial a novel shape was presented and the participants had to indicate whether they believed the shape was a blap (yes or no). They then saw the response of one of four sources represented by an animal photo. This was followed by feedback. (b) On political trials a political statement was presented and the participants had to indicate whether they agreed with it (yes or no). They then saw the response of one of four sources represented by an animal photo. This was followed by a reminder of their response and the source’s response. (c) During the Choice Stage participants completed blap trials only. On each blap trial a novel shape was presented and the participant had to indicate whether they believed the shape was a blap (yes or no) and enter a confidence rating. They were then presented with two sources and asked to choose whose answer they would like to see. They then saw the response of the chosen source. Finally, they were given a chance to update their initial answer and confidence rating. Responses were self-paced unless otherwise stated. (d) There were four sources represented with animal photos which the participants were led to believe were other participants but were in fact algorithms designed to respond in the following pattern: (i) one source agreed with the subject on 80% of the political trials and was correct on 80% of blap trials (Similar-Accurate), (ii) one source agreed with the subject on 80% of the political trials and was correct on only 50% of blap trials (Similar-Random), (iii) one source agreed with the subject on 20% of the political trials and was correct on 80% of blap trials (Dissimilar-Accurate), and (iv) one source agreed with the subject on 20% of the political trials and was correct on 50% of blap trials (Dissimilar-Random). Pictures assigned to sources were counter-balanced.
All participants were paid a base rate of $2.50. They were told they could earn a bonus between $2.50 and $7.50 based on their performance, but were not told exactly how performance would be measured. Unbeknownst to the participants, our rule for paying the bonus was as follows: any participant who passed the learning stage test (see details below) and completed the choice stage received a $5 bonus.
2.1.2. Study design
2.1.2.1. Learning stage. The goal of the learning stage was to give
participants an opportunity to learn about the other participants’
(hereafter ‘sources’) political views and about their competency on
the shape task (hereafter ‘Blap task’). Before the learning stage,
participants completed four practice blap trials and four practice
political trials. They were not presented with information from
sources on practice trials.
The learning stage consisted of 8 blocks of 20 trials each (10 blap
trials and 10 political trials interleaved). Responses from one of the four
sources were shown for the duration of a block (each source was used in
two blocks), the order of which was randomized across blocks.
Qualtrics’ loop and merge tool was used to randomize the order of the
questions within each block.
1. Blap trials (Fig. 1a). On each trial, one of 204 coloured shapes was presented on screen. Participants were required to learn through trial and error to classify shapes as ‘blaps’ or ‘not blaps’, ostensibly based on the shape’s features. Unbeknownst to the participants, whether a shape was a blap or not was not rule based, but rather randomly determined before the beginning of the task, such that half the stimuli were categorized as “blaps”. Because participants did not in fact have any means to learn which type of stimulus was a blap, the average performance across participants was around 50% (M = 48%, SD = 10.57). Participants had as much time as they needed to enter their response with a key press indicating either “yes” (the shape is a blap) or “no” (it is not) (M = 2.78 s, SD = 9.27). They then observed the response of one of the four sources for 1 s. Thereafter they received feedback on whether they and the source were each correct or incorrect (2 s).
2. Political trials (Fig. 1b). On political trials, participants indicated whether they agreed or disagreed with one of 84 social/political cause-and-effect statements (e.g. “Lowering the minimum voting age would help get young people interested in politics”; see the full set of statements in the supplementary materials). These statements were developed on the basis of various political attitude questionnaires (see supplementary materials). Participants had as much time as they needed to press a key indicating whether their response was “yes” or “no” (M = 5.89 s, SD = 16.87). They then observed the response of one of the four sources for 1 s. Thereafter they were shown their response together with that of the source (2 s).
2.1.2.2. Sources. Participants were told that on each trial, they would
be presented with the response of one of four participants (‘sources’)
who performed the task earlier. Unbeknownst to the participants, these
sources were not in fact other people but algorithms designed to
respond in the following pattern. (i) One source agreed with the
subject on 80% of the political trials and was correct on 80% of blap
trials (Similar-Accurate). (ii) One source agreed with the subject on
80% of the political trials and was correct on only 50% of blap trials
(Similar-Random). (iii) One source agreed with the subject on 20% of
the political trials and was correct on 80% of blap trials (Dissimilar-
Accurate). (iv) One source agreed with the subject on 20% of the
political trials and was correct on 50% of blap trials (Dissimilar-
Random). On blap trials all sources agreed with the participant about
half the time on average (M= 50%, SD = 11.52). To avoid gender and
racial bias, sources were represented with a picture of an animal.
Pictures assigned to sources were counter-balanced.
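The four source profiles reduce to two independent probabilities per source: the rate of agreement with the subject on political trials and the rate of correct answers on blap trials. A minimal sketch of how such a source algorithm could be simulated (function and variable names are ours, not from the paper's materials):

```python
import random

def simulate_source(subject_answers, trial_types, correct_answers,
                    p_agree=0.8, p_correct=0.8, rng=None):
    """Simulate one source's responses, trial by trial.

    On political trials the source agrees with the subject's answer with
    probability p_agree; on blap trials it gives the objectively correct
    answer with probability p_correct (0.5 = random performance).
    Answers are booleans (True = "yes").
    """
    rng = rng or random.Random(0)
    responses = []
    for answer, kind, truth in zip(subject_answers, trial_types, correct_answers):
        if kind == "political":
            agree = rng.random() < p_agree
            responses.append(answer if agree else not answer)
        else:  # blap trial
            correct = rng.random() < p_correct
            responses.append(truth if correct else not truth)
    return responses
```

Under this scheme the Similar-Accurate source would be `p_agree=0.8, p_correct=0.8`, and the Dissimilar-Random source `p_agree=0.2, p_correct=0.5`.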
2.1.2.3. Attention check. At the end of each block, participants were
presented with an attention check in which they were asked one of the
following questions regarding the last trial: “Did the source AGREE or
DISAGREE with your answer?”; “What was your last response?”;
“Which source was shown on the last trial?”; “Was the last question a
political or blap question?” For the latter two questions, 98.97% and
93.81% of participants were correct, respectively. Data for the former two questions were mistakenly not saved, so we cannot report their accuracy.
2.1.2.4. Learning test. The goal of the study was to assess how similarity
affected the ability to assess competence and information-seeking
behaviour. We thus tested participants’ perception of who was similar
to them to determine if the similarity manipulation was successful.
Specifically, after the learning stage, participants were presented with
12 trials. On each trial two sources were presented and the subject had
to indicate who was more similar to them (“Who is more similar to
you?”). Each possible pair of sources (six combinations) was presented
twice for a total of twelve trials. Only participants who responded correctly (as determined according to the similarity manipulation described above) on eleven trials or more were considered to have accurately assessed similarity and continued to the choice stage (n = 97).
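The 11-of-12 criterion makes it very unlikely that a participant who had not actually tracked the similarity structure would pass by guessing. A quick check of the binomial tail (our illustration; the paper does not report this calculation):

```python
from math import comb

def p_pass_by_guessing(n_trials=12, n_required=11, p=0.5):
    """Probability of at least n_required correct out of n_trials
    when guessing with per-trial success probability p."""
    return sum(comb(n_trials, k) * p**k * (1 - p)**(n_trials - k)
               for k in range(n_required, n_trials + 1))

# With 12 trials and a threshold of 11 correct, a pure guesser passes
# with probability (12 + 1) / 2**12 = 13/4096, i.e. about 0.3%.
```

So the participants who advanced to the choice stage can safely be assumed to have genuinely perceived the similarity manipulation.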
2.1.2.5. Ratings of similarity and accuracy. Participants then rated each
source on (1) how competent they were at determining if each object
was a blap (“How competent was the source at figuring out if each
object was a blap?” from 0 = “Very incompetent” to 100 = “Very
competent”) and (2) how similar the source was to them (“How
similar do you think this source was to you?” from 0 = “Not at all
like me” to 100 = “Exactly like me”). We did not specifically ask about
political similarity, as we wanted to avoid artificially focusing subjects’
attention on that question. While participants may have construed the
question as referring to political similarity and/or similarity on blap
performance and/or similarity to the image of the animal, this would
have only added noise to the data. As can be observed in the results section, sources who were objectively politically similar to the subjects were rated significantly higher on this scale, as expected.
2.1.2.6. Choice stage (Fig. 1c). The goal of the choice stage was to
assess who the participant wanted to hear from about blaps and how
they used the information they received. On each of 120 trials, participants were presented with a novel shape and asked to indicate with a button press whether they thought the shape was a blap (“yes” or “no”) (RT: M = 3.46 s, SD = 53.90). They subsequently rated their confidence in this decision (self-paced) on a scale from 0 (not at all confident) to 100 (extremely confident). They were then presented with a pair of sources and asked whose response they wanted to see (self-paced) (RT: M = 2.04 s, SD = 79.13). They were then shown the response of the chosen source for 2 s. Thereafter the shape was presented again and participants were asked again to indicate with a button press whether they believed the shape was a blap (“yes” or “no”) (RT: M = 1.29 s, SD = 9.79). Lastly, participants rated their confidence (self-paced) in their final decision.
The participants were instructed at the beginning of the choice stage
that they could alter their answer on this second guess if they wanted
to. There were 6 blocks of 20 trials each with the six source pairs
pseudo-randomized throughout each block. There were neither political trials nor feedback in the choice stage.
2.1.2.7. Second attention check. As in the learning stage, participants
were presented with an attention check question at the end of each
block in which they were asked one of the following questions: “Which
source did you NOT select on the last trial?”; “Which source did you
select on the last trial?”; “Did the source AGREE or DISAGREE with
your answer?” There was an error in recording these data, so we cannot provide accuracy rates.
2.1.2.8. Post-task ratings and debrief. Finally, participants completed a
debriefing questionnaire (see supplementary materials). During this
debrief, participants were asked once again (1) how competent each
source was at determining if each object was a blap (“How competent
was the source at figuring out if each object was a blap?” from
0 = “Very incompetent” to 100 = “Very competent”) and (2) how
similar the source was to them (“How similar do you think this
source was to you?” from 0 = “Not at all like me” to 100 = “Exactly
like me”). Analyses of post-task ratings and questions are presented in the supplementary materials.
2.2. Results
2.2.1. Participants prefer to receive information about shapes from
politically like-minded sources
We first asked whom participants selected to hear from on the blap task. We find that participants sensibly prefer to hear from sources that are more accurate on the blap task, but also prefer to hear from politically like-minded sources even when those sources were not very good at the blap task (Fig. 2).
Specifically, each source was presented as an option out of two sources on 50% of trials. Thus, if participants had no preference they would select each source on 25% of the trials. We found that the similar-accurate source was chosen most often (M = 33%, SD = 15.56; significantly greater than chance: t(96) = 4.85, p < .001), followed by the similar-random source (M = 30%, SD = 12.30; significantly greater than chance level: t(96) = 3.65, p < .001), followed by the dissimilar-accurate source (M = 24%, SD = 15.93; not different from chance level: t(96) = −0.44, p = .66), and finally by the dissimilar-random source (M = 13%, SD = 13.53; significantly lower than chance: t(96) = −8.34, p < .001). Entering percentage choice into a two (similar/dissimilar) by two (accurate/random) ANOVA revealed a main effect of source accuracy (F(1, 96) = 23.32, p < .001, ηp² = 0.20), a main effect of political similarity (F(1, 96) = 33.67, p < .001, ηp² = 0.26) and an interaction (F(1, 96) = 7.22, p = .008, ηp² = 0.07). The interaction was due to participants selecting to hear from the accurate-dissimilar source over the random-dissimilar source (t(96) = 5.05, p < .001, d = 0.73), but revealing no preference between the two similar sources (t(96) = 1.62, p = .11, d = 0.22). Strikingly, participants preferred to hear from the politically like-minded source that performed randomly on the blap task over the source that was accurate on the blap task but dissimilar politically (t(96) = −2.10, p = .038, d = −0.37).
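The 25% chance baseline follows from the pairing scheme: each source appears in three of the six possible pairs, so it is offered on half of the trials and, under indifference, chosen on half of those. A short simulation confirming this (illustrative only; the simulated trial count is arbitrary):

```python
import itertools
import random

def chance_selection_rates(n_sources=4, n_sims=100000, seed=0):
    """Per-source selection rate when every choice is made at random.

    Each simulated trial shows one of the possible source pairs; with
    no preference, each of four sources is expected to be chosen on
    about 25% of trials.
    """
    rng = random.Random(seed)
    pairs = list(itertools.combinations(range(n_sources), 2))
    counts = [0] * n_sources
    for _ in range(n_sims):
        pair = rng.choice(pairs)          # which pair is offered
        counts[rng.choice(pair)] += 1     # indifferent pick within the pair
    total = sum(counts)
    return [c / total for c in counts]
```

With four sources this returns rates close to 0.25 for every source, matching the analytic baseline in the text.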
Mean reaction times for source choice were as follows: accurate-similar source (M = 3.52 s, SD = 138.29); accurate-dissimilar source (M = 1.36 s, SD = 5.45); random-similar source (M = 1.24 s, SD = 4.66); random-dissimilar source (M = 1.44 s, SD = 4.24). Entering choice log reaction times into a 2 (similar/dissimilar) by 2 (accurate/random) ANOVA did not reveal any main effects or interaction (all P > .13).
2.2.2. Political similarity leads to an illusory perception of competence on
the shape task
What could explain the tendency to seek information about shapes from others who are politically like-minded? Our hypothesis was that (dis)similarity in political views would interfere with participants’ ability to assess others’ competence on the blap task. The rationale is that political (dis)similarity will generate a (negative) positive view of the source, which will generalize to the unrelated domain of shape categorization.

To test this hypothesis we first tested for a correlation between participants’ ratings of how similar the sources were to them and how good the sources were on the blap task. The true correlation was zero. Nonetheless, participants had an illusory perception that the more similar the source was to them, the better the source was on the shape task (r = 0.37, p < .001, Fig. 3a).
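Note that the zero true correlation is guaranteed by the design: the two source factors are fully crossed, so blap accuracy (80/50/80/50%) and political agreement (80/80/20/20%) are orthogonal across the four sources. A quick check, using the design values from the text (the helper function is our sketch):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The four sources, in the order Similar-Accurate, Similar-Random,
# Dissimilar-Accurate, Dissimilar-Random:
accuracy   = [0.8, 0.5, 0.8, 0.5]  # blap accuracy
similarity = [0.8, 0.8, 0.2, 0.2]  # political agreement rate
# pearson_r(accuracy, similarity) is 0: the factors are fully crossed.
```

Any nonzero correlation in participants' ratings of the two attributes therefore reflects perception, not the stimuli.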
Second, we examined how participants rated the four sources on their ability to categorize shapes. Entering these ratings into a two (similar/dissimilar) by two (accurate/random) repeated-measures ANOVA revealed not only a sensible main effect of source accuracy (F(1, 96) = 22.98, p < .001, ηp² = 0.19), but also an illusory main effect of source political similarity (F(1, 96) = 45.41, p < .001, ηp² = 0.32) and no interaction (F(1, 96) = 0.74, p = .39, ηp² = 0.01). Although both accurate sources were correct 80% of the time, participants rated the similar-accurate source as more competent at the blap task (M = 75%, SD = 12.91) than the dissimilar-accurate source (M = 63%, SD = 18.83; comparison between the two: t(96) = 5.52, p < .001, d = 0.72). Likewise, although both random sources were accurate only 50% of the time, participants rated the similar-random source as more competent (M = 69%, SD = 17.24) than the dissimilar-random source (M = 56%, SD = 20.26; comparison between the two: t(96) = 5.89, p < .001, d = 0.73; Fig. 3b). Interestingly, the source that had different political views but excelled at the blap task (dissimilar-accurate) was rated less competent on the blap task than the source that performed randomly but was politically like-minded (t(96) = −2.58, p = .011, d = −0.33).
2.2.3. An illusory perception of competence on shape task mediates the
relationship between political similarity and information-seeking behavior
The above results suggest that political similarity influenced perceptions of source competence, with more politically similar sources viewed as more competent than their equally accurate counterparts. Does this explain the tendency to turn to politically like-minded people for information on blaps?

To test this possibility formally, we performed a causal mediation analysis (Fig. 3c) that asks whether the relationship between objective political similarity and information-seeking behaviour is mediated by subjective ratings of competence on the blap task.
A multilevel modelling approach was used (Preacher, 2015), which allows for the appropriate treatment of non-independent observations by nesting trial-level observations within upper-level units (individual participants). Bayesian estimation of the multilevel mediation model was performed in the R programming language, using the open-source software package bmlm (Vuorre & Bolger, 2017). The bmlm package estimates regression models, with individual-level and group-level parameters estimated simultaneously using Markov chain Monte Carlo (MCMC) procedures. The default MCMC sampling procedure was employed, with 4 MCMC chains and 2000 iterations.

Fig. 2. Participants prefer to receive information about shapes from politically like-minded sources. For each participant we calculated the percentage of times they selected to hear from each source about blaps out of all trials and averaged across participants. As each source was presented as an option an equal number of times, if the participants had no preference each source would be selected on about 25% of trials. A preference (main effect) for both accurate sources over inaccurate sources and for politically similar sources over politically dissimilar sources was found. Error bars represent SEM. *p < .05, **p < .01, ***p < .001.
The mediation model examined whether perceived competence mediates the relationship between objective political similarity and source chosen, with a predictor (X; source political similarity), mediator (M; competence rating), and dependent variable (Y; percentage each source was chosen). Indeed, we found a significant indirect effect of political similarity on choice through subjective competence rating (path ab: Mposterior = 2.44, SD = 0.53, CI = [1.47, 3.54]).

The model shows the following. First, objective political similarity predicted how likely the participant was to turn to a source for information about blaps (total effect: Mposterior = 6.10, SD = 1.05, CI = [4.06, 8.20]). Politically like-minded sources were, in general, chosen more often. This effect was attenuated, though not eliminated, when controlling for subjective competence ratings (path c′: Mposterior = 3.66, SD = 1.04, CI = [1.63, 5.71]). Second, objective political similarity was positively related to subjective competence ratings (path a: Mposterior = 6.32, SD = 0.94, CI = [4.45, 8.17]); similar sources were perceived as more competent. Third, subjective competence ratings predicted choice when objective political similarity was accounted for (path b: Mposterior = 0.39, SD = 0.06, CI = [0.27, 0.50]), suggesting that subjective competence had a unique effect on choice.
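The path logic described above (path a: X→M; paths b and c′: M and X jointly predicting Y; indirect effect ab) can be illustrated with a simplified, single-level ordinary-least-squares version. This is only a sketch of the structure, not the hierarchical Bayesian model (bmlm) actually used in the analysis, and all names are ours:

```python
import numpy as np

def mediation_paths(x, m, y):
    """Single-level mediation via OLS: a simplified, non-hierarchical
    stand-in for the multilevel Bayesian model described in the text.

    x: predictor (e.g. source political similarity)
    m: mediator  (e.g. subjective competence rating)
    y: outcome   (e.g. how often the source was chosen)
    Returns (a, b, c_prime, indirect) where indirect = a * b.
    """
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    ones = np.ones_like(x)
    # Path a: regress mediator on predictor (m ~ 1 + x).
    a = np.linalg.lstsq(np.column_stack([ones, x]), m, rcond=None)[0][1]
    # Paths c' and b: regress outcome on predictor and mediator (y ~ 1 + x + m).
    coefs = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)[0]
    c_prime, b = coefs[1], coefs[2]
    return a, b, c_prime, a * b
```

In the reported model, X, M and Y would be source political similarity, competence rating, and choice percentage, with the participant-level nesting and MCMC estimation that this sketch omits.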
2.2.4. Accuracy on the blap task affects perception of similarity
The above results suggest that the effect of political similarity on participants’ choice of whom to turn to for information on blaps is partially mediated by their (illusory) subjective perception of the source’s competence on the blap task. One may ask, though, whether the reverse relationship is also true. Although less intuitive, could it be that sources that are more accurate on blaps are perceived to be more similar, and that this perceived similarity mediates a relationship between objective accuracy and information-seeking behaviour?
To answer this question, we first examined how participants rated the sources on similarity. Entering similarity ratings into a 2 (similar/dissimilar) × 2 (accurate/random) ANOVA revealed a sensible main effect of political similarity (F(1, 96) = 648.76, p < .001, ηp² = 0.87) and no significant main effect of accuracy (F(1, 96) = 0.013, p = .91, ηp² < 0.01). An interaction also emerged (F(1, 96) = 7.23, p = .008, ηp² = 0.07). The interaction was due to the fact that while both politically similar sources agreed with the participant 80% of the time on political trials, there was an illusory perception that the more accurate source on blaps (similar-accurate) was significantly more similar to the subject (M = 81%, SD = 11.81) than the source that performed randomly on the blap task (similar-random: M = 77%, SD = 14.15; difference between the two: t(96) = 2.48, p = .015, d = 0.33). The two politically dissimilar sources were not rated as significantly different on similarity (dissimilar-accurate: M = 29%, SD = 20.44; dissimilar-random: M = 33%, SD = 20.03; comparison between the two: t(96) = −1.50, p = .14, d = −0.19; Fig. 4a).
The above results reveal an illusion by which a source that is more accurate on the blap task is viewed as more similar to the self, perhaps revealing a motivation to associate the self with successful, similar others. We therefore conducted a second mediation analysis, using the same procedure as above, to examine whether perceived similarity mediates the relationship between objective accuracy and source chosen, with a predictor (X; source accuracy), mediator (M; similarity rating), and dependent variable (Y; percentage each source was chosen).
Our mediation model showed that it was not the case that subjective
similarity mediated a relationship between objective accuracy on the
blap task and information seeking behaviour (Fig. 4b). We did not find
a significant effect of source accuracy on similarity rating nor did we
find evidence of an indirect effect.
In particular, the mediation showed that objective accuracy on the blap task predicted how likely the participant was to turn to a source for information about blaps (total effect: Mposterior = 2.96, SD = 0.81, CI = [1.42, 4.57]), showing that accurate sources were chosen more often. The effect was not, however, reduced when subjective similarity was controlled (path c′: Mposterior = 2.74, SD = 0.69, CI = [1.41, 4.14]), suggesting that the accuracy-related variance in source choice is not shared with subjective similarity. Although subjective similarity ratings predicted choice when objective blap accuracy was accounted for (path b: Mposterior = 0.20, SD = 0.04, CI = [0.12, 0.27]), suggesting that subjective similarity had a unique effect on choice, objective accuracy was not predictive of subjective similarity ratings (path a: Mposterior = 0.11, SD = 1.48, CI = [−2.76, 3.10]), and the indirect effect of accuracy on the blap task on choice through subjective similarity rating was not significant (path ab: Mposterior = 0.21, SD = 0.42, CI = [−0.51, 1.16]).

Fig. 3. An illusory perception of accuracy mediates the relationship between political similarity and information seeking behavior. (a) The true correlation between how accurate a source was on the blap task and how like-minded they were to the participant was zero. Nevertheless, participants’ ratings revealed an illusory perception that the two were related (r = 0.37, p < .001). (b) Participants rated accurate sources as more competent on the blap task, but also rated politically like-minded sources as more competent on the blap task. (c) A mediation model revealed that perceived competence partially mediated the relationship between political similarity and choice of which source to hear from about blaps. Error bars represent SEM. *p < .05, **p < .01, ***p < .001.
2.2.5. Participants’ shape judgments are more influenced by sources that are
politically like-minded
Thus far we find that participants are inclined to turn to sources that
are like-minded politically to receive information on blaps. Are they
also more likely to be influenced by them? We quantified the extent to
which participants were influenced by a source by calculating the
percentage of times the participant changed their answer when a source
disagreed with them (only participants who chose to hear from each
source at least once could be included in this analysis, N included = 70).
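This influence measure — the percentage of disagreement trials on which the participant switched answers — is straightforward to compute; a minimal sketch with made-up trial vectors (the variable names and values are ours, for illustration only):

```python
import numpy as np

# One row per trial: the participant's initial answer, the chosen source's
# answer, and the participant's final answer (1 = "blap", 0 = "not blap").
initial = np.array([1, 0, 1, 1, 0, 1])
source  = np.array([0, 0, 0, 1, 1, 1])
final   = np.array([0, 0, 1, 1, 1, 1])

disagree = initial != source              # trials on which the source disagreed
switched = (final != initial) & disagree  # ...and the participant changed answer
influence = 100 * switched.sum() / disagree.sum()  # % changed on disagreement trials
```

In this toy example the source disagrees on three trials and the participant switches on two of them, so the influence score is 66.7%; averaging such scores per source across participants gives the values reported above.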
We find that, after choosing whom to listen to, participants are more influenced by the sources that are politically like-minded and more
accurate on the blap task (Fig. 5a). Participants changed their decisions
on disagreement trials most often in response to information from the
similar-accurate source (M= 62%, SD = 30.02; significantly greater
than chance: t(93) = 4.74, p< .001) followed by the dissimilar-
accurate source (M= 58%, SD = 29.59; significantly greater than
chance: t(93) = 3.31, p= .001), followed by the similar-random source
(M= 57%, SD = 29.94; significantly greater than chance: t(95) = 3.10,
p= .003) and finally by the dissimilar-random source (M= 42%,
SD = 34.72; not different from chance level: t(76) = −1.70, p= .093).
Entering the percentage of answers changed on disagreement trials into a two (similar/dissimilar) by two (accurate/random) repeated-measures ANOVA revealed a main effect of source accuracy (F(1, 69) = 8.90, p = .004, η_p² = 0.11), a main effect of political similarity (F(1, 69) = 7.14, p = .009, η_p² = 0.09) and a marginal interaction effect (F(1, 69) = 3.98, p = .050, η_p² = 0.06). Post-hoc t-tests showed that the interaction was due to the accurate-dissimilar source having greater influence than the random-dissimilar source (t(73) = 3.24, p = .002, d = 0.53) while there was no difference in influence between the two similar sources (t(92) = 1.43, p = .16, d = 0.29). Similar results are obtained when incorporating both judgement and confidence into a measure of influence (see supplementary results).
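Because each effect in a 2 × 2 repeated-measures ANOVA with one score per cell has a single degree of freedom, each F can be computed as the square of a one-sample t on a within-participant contrast. A sketch with fabricated scores (the four columns mirror the four sources; the numbers are ours, not the study's data):

```python
import numpy as np

# Columns: similar-accurate, similar-random, dissimilar-accurate, dissimilar-random
# (one %-changed score per participant per source; invented values).
d = np.array([
    [62, 55, 58, 40],
    [70, 60, 62, 45],
    [55, 58, 50, 38],
    [65, 52, 60, 44],
], dtype=float)

def f_from_contrast(contrast):
    """One-sample t on a within-subject contrast; F = t**2 with df = (1, n - 1)."""
    t = contrast.mean() / (contrast.std(ddof=1) / np.sqrt(len(contrast)))
    return t ** 2

sa, sr, da, dr = d.T
F_similarity  = f_from_contrast((sa + sr) - (da + dr))  # main effect of political similarity
F_accuracy    = f_from_contrast((sa + da) - (sr + dr))  # main effect of source accuracy
F_interaction = f_from_contrast((sa - sr) - (da - dr))  # similarity x accuracy interaction
```

The scaling of each contrast is irrelevant, since the t statistic is scale-invariant; only the sign pattern over the four cells matters.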
Mean reaction times for participants' final decisions were as follows: accurate-similar source (M = 1.16 s, SD = 2.06); accurate-dissimilar source (M = 1.42 s, SD = 10.56); random-similar source (M = 1.87 s, SD = 20.05); random-dissimilar source (M = 1.73 s, SD = 3.62). Entering log reaction times for participants' decision to stick with or alter their choice into a 2 (similar/dissimilar) by 2 (accurate/random) ANOVA did not reveal any main effects or interaction (all P > .20).
Fig. 4. Accuracy on blap task partially enhances sense of similarity. (a) Politically similar sources were rated as more similar by participants. Interestingly, the politically like-minded source that was also more accurate on blaps was rated as more similar than the politically like-minded source that was random on blaps. This suggests that accuracy on the blap task partially affected perceived similarity. (b) The reverse mediation to that tested in Fig. 3 – by which perceived similarity mediates the effect between source accuracy and information seeking behaviour – was not significant. Error bars represent SEM. **p < .01, ***p < .001.

Fig. 5. Participants' blap judgments are more influenced by sources that are politically like-minded. (a) Participants were more likely to change their minds about blaps when sources that were (i) more accurate at the blap task and (ii) more politically like-minded disagreed with their blap judgment than when sources that were less accurate on blaps and/or politically different disagreed with their blap judgement. (b) A mediation model revealed that the relationship between political similarity and source influence was mediated by perceived competence on the blap task. Error bars represent SEM. **p < .01.

The results suggest that both accuracy on the blap task and political similarity exert an effect on how influenced participants are by the sources. We next conducted a mediation model to test whether the effect of political similarity on influence was mediated by perceived accuracy on the blap task. Results of the multilevel mediation showed that objective political similarity predicted source influence (total effect: M_posterior = 4.32, SD = 1.44, CI = [4.32, 1.54]) and was also positively related to the subjective ratings of competence (path a: M_posterior = 5.99, SD = 0.93, CI = [4.16, 7.80]), which in turn predicted source influence when source similarity was accounted for (path b: M_posterior = 0.66, SD = 0.11, CI = [0.44, 0.88]). The indirect effect of political similarity on source influence through competence rating was significant (path ab: M_posterior = 4.09, SD = 1.01, CI = [2.20, 6.13]) and, once subjective competence rating was controlled for, political similarity no longer predicted source influence (path c′: M_posterior = 0.23, SD = 1.39, CI = [−2.50, 3.05]). These results demonstrate that the effect of political similarity on influence is fully mediated by the perceived competence of the source (Fig. 5b).
Note that the conceptually reverse mediation model, with objective source accuracy as the predictor, subjective political similarity as the mediator and source influence as the dependent variable, was not significant (no significant effect of objective accuracy on subjective similarity nor an indirect effect on source influence).

In particular, the model shows that objective accuracy on the blap task predicted source influence (total effect: M_posterior = 4.34, SD = 1.37, CI = [1.65, 7.09]), showing that accurate sources had more influence. The effect was still significant when controlling for subjective similarity (path c′: M_posterior = 4.56, SD = 1.35, CI = [1.91, 7.31]), suggesting that objective accuracy had a unique effect on source influence. Again, although subjective similarity ratings predicted source influence when objective blap accuracy was accounted for (path b: M_posterior = 0.19, SD = 0.05, CI = [0.08, 0.30]), suggesting that subjective similarity had a unique effect on source influence, objective accuracy was not predictive of subjective similarity ratings (path a: M_posterior = −1.39, SD = 1.53, CI = [−4.49, 1.62]), and the indirect effect of accuracy on the blap task on source influence through subjective similarity rating was not significant (path ab: M_posterior = −0.22, SD = 0.38, CI = [−0.98, 0.54]).
The results of Experiment 1 suggested that knowledge of another’s
political views interferes with the ability to learn about that person’s
competence in an unrelated task. Politically like-minded sources were
more likely to be chosen and the information they provided had a
greater influence on participants’ decisions. Our mediation analyses
suggest that participants preferred to hear from, and were more influ-
enced by, politically similar sources because they falsely believed these
sources were better at categorizing blaps than politically dissimilar
sources.
3. Experiment 2
In Experiment 2 we test whether the findings of Experiment 1 re-
plicate with minor adjustments to the methods (see below).
3.1. Methods
3.1.1. Participants
The recruitment procedure was the same as for Experiment 1. In
Experiment 2, 186 participants completed the Learning Stage. 101 (47
females and 54 males, aged 18–63 years M= 37.59, SD = 10.92)
passed the learning test and proceeded to the Choice Stage.
Participants who passed the learning test did not differ from those
who failed on age, gender, ethnicity, language, political ideology, in-
terest/involvement in US politics, or generalized trust (all P > .18).
Unlike in Experiment 1, participants who passed tended to have higher income (t(184) = 2.06, p = .041), education (t(184) = 2.59, p = .010) and subjective socio-economic position (t(184) = 4.83, p < .001).
There was a strong positive correlation between performance on the
attention check and accuracy on the learning test (r = 0.44, p< .001),
suggesting that participants who passed the learning test (by answering
at least eleven out of twelve trials correctly) were more attentive than
those who failed. Participants who passed the learning test were correct
on a greater number of the attention check questions (M= 92%,
SD = 11.80) than those who failed the learning test (M= 78%,
SD = 18.54; comparison between the two t(184) = 6.31, p< .001,
d = 0.91). As in Experiment 1, participants who failed the learning test did not progress to the choice stage and thus did not complete the main experimental task. In the choice stage, participants answered 74% of the attention checks correctly (SD = 19.71).
3.1.2. Study design
The methods of Experiment 2 were the same as in Experiment 1
except for the following changes:
(i) Contrary to Experiment 1, we did not determine in advance which
stimuli were blaps. Rather, feedback was given regardless of sti-
mulus shown such that all participants were told they were correct
on exactly 50% of the blap trials and incorrect on exactly 50% of
blap trials. In contrast, in Experiment 1 participants’ accuracy rates
depended on whether a stimulus was in fact coded to be a blap or
not. Thus, accuracy rates differed across participants with an
average of 48% (SD = 10.57).
(ii) The percentage of times the sources gave the same answer as the participant on blap trials was held constant at exactly 50% for each subject and source. In contrast, in Experiment 1 the percentage of times the sources gave the same answer as the participant on blap trials was not hard-coded and was normally distributed around 50% (SD = 11.52).
(iii) The wording of one of the post-task questions was changed slightly
to read “How politically similar do you think this source was to
you?”
(iv) We added the following post-task question “How competent do
you think you were at figuring out if each object was a blap?”
0 = “Very incompetent” to 100 = “Very competent”.
(v) Attention-check data was successfully recorded.
Changes (i)–(iii) enable us to test for replication under slightly different conditions. Change (iv) tested participants' perception of their own ability.
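Change (i) amounts to delivering a pre-balanced feedback sequence that ignores the participant's actual responses. A minimal sketch of how such a schedule could be generated (the function name and trial count are our own invention, not the authors' code):

```python
import random

def feedback_schedule(n_trials, seed=None):
    """Shuffled feedback sequence with exactly half 'correct' outcomes,
    delivered regardless of what the participant actually answered."""
    assert n_trials % 2 == 0, "need an even number of trials for an exact 50% split"
    schedule = [True] * (n_trials // 2) + [False] * (n_trials // 2)
    random.Random(seed).shuffle(schedule)
    return schedule

sched = feedback_schedule(20, seed=1)  # e.g. one hypothetical participant's blap trials
```

Because the list is built balanced and then shuffled, every participant is guaranteed exactly 50% "correct" feedback, unlike Experiment 1 where accuracy varied with the true blap coding.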
3.2. Results
3.2.1. Participants prefer to receive information about shapes from
politically like-minded sources
As in Experiment 1, we find that participants sensibly prefer to hear
from sources that are more accurate on the blap task, but also prefer to
hear from politically like-minded sources even when they were not very
good at the blap task (Fig. 6). Specifically, the similar-accurate source
was chosen most often (M= 33%, SD = 14.18; significantly greater
than chance: t(100) = 5.70, p< .001), followed by the similar-random
source (M= 27%, SD = 14.16; not different from chance: t
(100) = 1.32, p= .19), followed by the dissimilar-accurate source
(M= 23%, SD = 16.72; not different from chance: t(100) = −1.45,
p= .15) and finally by the dissimilar-random source (M= 18%,
SD = 13.66; significantly lower than chance: t(100) = −5.51,
p < .001). Entering the percentage of times each participant selected each source into a two (similar/dissimilar) by two (accurate/random) repeated-measures ANOVA revealed a main effect of source accuracy (F(1, 100) = 14.09, p < .001, η_p² = 0.12) and political similarity (F(1, 100) = 25.07, p < .001, η_p² = 0.20) with no interaction (F(1, 100) = 0.13, p = .72, η_p² = 0.001). Participants were not more likely to choose the source that was accurate on the blap task but dissimilar politically (dissimilar-accurate) over the politically like-minded source that performed randomly on the blap task (similar-random) (t(100) = −1.61, p = .11, d = −0.28).
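The chance-level comparisons reported here are one-sample t-tests of each participant's selection rate against 25% (four sources offered equally often). The statistic can be computed directly (the eight rates below are invented for illustration):

```python
import numpy as np

# % of trials each (hypothetical) participant chose a given source
rates = np.array([33.0, 30.0, 38.0, 27.0, 35.0, 31.0, 29.0, 40.0])
chance = 25.0  # four options presented equally often -> 25% under no preference

n = len(rates)
# One-sample t statistic against the chance level, df = n - 1
t_stat = (rates.mean() - chance) / (rates.std(ddof=1) / np.sqrt(n))
```

A positive t indicates the source was chosen above chance and a negative t below chance, matching the pattern of "greater than" and "lower than" chance tests in the text.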
Mean reaction times for source choice were as follows: accurate-similar source (M = 1.91 s, SD = 29.38); accurate-dissimilar source (M = 1.54 s, SD = 6.17); random-similar source (M = 1.59 s, SD = 5.94); random-dissimilar source (M = 1.53 s, SD = 6.28). Entering choice log reaction times into a 2 (similar/dissimilar) by 2 (accurate/random) ANOVA did not reveal any main effects or interaction (all P > .12).
3.2.2. Political similarity leads to an illusory perception of competence on
the shape task
As in Experiment 1, we find that participants’ ratings of how similar
the sources were to them correlated with their ratings of how compe-
tent they thought the sources were at the blap task. Specifically, par-
ticipants had an illusory perception that the more similar the source was
to them, the better the source was on the shape task (r = 0.36,
p< .001, Fig. 7a).
We then assessed how participants rated the four sources on their ability to categorize blaps, entering these ratings into a two (similar/dissimilar) by two (accurate/random) repeated-measures ANOVA. This revealed a sensible main effect of source accuracy (F(1, 100) = 7.67, p = .007, η_p² = 0.07), an illusory main effect of source political similarity (F(1, 100) = 27.88, p < .001, η_p² = 0.22) and no interaction (F(1, 100) = 0.39, p = .53, η_p² < 0.01).
Participants rated the similar-accurate source as more competent at
the blap task (M= 72%, SD = 17.12) than the dissimilar-accurate
source (M= 62%, SD = 20.13; comparison between the two t
(100) = 4.26, p< .001, d= 0.55). Likewise, participants rated the
similar-random source as more competent (M= 67%, SD = 15.19) than
the dissimilar-random source (M = 58%, SD = 18.43; comparison be-
tween the two t(100) = 4.18, p< .001, d= 0.51; Fig. 7b). The source
that was politically like-minded but poor on the blap task (similar-
random) was rated as more competent at the blap task than the source
that performed well but had different political views (t(100) = −2.18,
p= .031, d= −0.29).
3.2.3. An illusory perception of competence on shape task mediates the
relationship between political similarity and information-seeking behavior
We next test whether participants chose to hear from the politically
similar sources because they believed they were more competent at the
blap task. That is, we ask whether the relationship between objective
political similarity and information seeking behaviour is mediated by
subjective ratings of competence on the blap task. We used the same
procedure as in Experiment 1 to perform this mediation analysis.
The model shows that objective political similarity predicted how likely the participant was to turn to a source for information about blaps (total effect: M_posterior = 4.73, SD = 0.90, CI = [2.92, 6.46]), with politically like-minded sources chosen more often. This effect was attenuated, though not eliminated, when controlling for subjective competence ratings (path c′: M_posterior = 2.70, SD = 0.82, CI = [1.06, 4.27]). Objective political similarity was positively related to subjective competence ratings (path a: M_posterior = 4.45, SD = 0.89, CI = [2.73, 6.20]); similar sources were perceived as more competent. Subjective competence ratings predicted choice when objective political similarity was accounted for (path b: M_posterior = 0.47, SD = 0.06, CI = [0.36, 0.58]), suggesting that subjective competence had a unique effect on choice. Finally, we find a significant indirect effect of political similarity on choice through subjective competence rating (path ab: M_posterior = 2.03, SD = 0.48, CI = [1.15, 3.06]).
3.2.4. Accuracy on the blap task affects perception of similarity
We next test whether sources that are more accurate on blaps are
perceived as more similar and whether this increase in perceived si-
milarity mediates the relationship between objective accuracy and in-
formation seeking behaviour.
We examined how participants rated the four sources on similarity, entering similarity ratings into a 2 (similar/dissimilar) × 2 (accurate/random) ANOVA. The results revealed a main effect of political similarity (F(1, 100) = 596.46, p < .001, η_p² = 0.86), no main effect of accuracy (F(1, 100) = 0.01, p = .94, η_p² < 0.01) and an interaction effect (F(1, 100) = 6.94, p = .010, η_p² = 0.07).
As in Experiment 1, for politically similar sources participants be-
lieved that the more accurate source on blaps (similar-accurate) was
significantly more similar (M= 80%, SD = 12.24) than the source that
performed randomly on the blap task (similar-random, M= 76%,
SD = 14.57, difference between the two: t(100) = 2.30, p= .024,
d= 0.30). The politically dissimilar sources were not rated as sig-
nificantly different on similarity (dissimilar-accurate M= 30%,
SD = 20.09; dissimilar-random M= 35%, SD = 20.69; comparison
between the two t(100) = −1.60, p= .11, d= −0.21; Fig. 8). Thus
our finding from Experiment 1 that sources that are both politically
similar and accurate on the blap task are viewed as more similar to the
self than sources that are politically similar but less accurate on the blap
task was replicated.
We conducted another mediation analysis to examine whether perceived political similarity mediates the relationship between objective accuracy and source chosen, with a predictor (X; source accuracy), mediator (M; similarity rating), and dependent variable (Y; percentage each source was chosen).
Again, we did not find a significant effect of source accuracy on similarity rating nor did we find evidence of an indirect effect. The mediation showed that objective accuracy on the blap task predicted how likely the participant was to turn to a source for information about blaps (total effect: M_posterior = 2.90, SD = 0.77, CI = [1.33, 4.40]), showing that accurate sources were chosen more often. The effect was not, however, reduced when subjective similarity was controlled (path c′: M_posterior = 2.83, SD = 0.70, CI = [1.46, 4.14]), suggesting that the accuracy-related variance in source choice is not shared with subjective similarity. Although subjective similarity ratings predicted choice when objective blap accuracy was accounted for (path b: M_posterior = 0.15, SD = 0.04, CI = [0.08, 0.23]), suggesting that subjective similarity had a unique effect on choice, objective accuracy was not predictive of subjective similarity ratings (path a: M_posterior = 0.52, SD = 1.52, CI = [−2.42, 3.49]), and the indirect effect of accuracy on the blap task on choice through subjective similarity rating was not significant (path ab: M_posterior = 0.07, SD = 0.32, CI = [−0.59, 0.71]).
Fig. 6. Participants prefer to receive information about shapes from politically like-minded sources. For each participant we calculated the percentage of times they selected to hear from each source about blaps out of all trials and averaged across participants. As each source was presented as an option an equal number of times, if participants had no preference each source would be selected on about 25% of trials. A preference (main effect) for both accurate sources over inaccurate sources and for politically similar sources over politically dissimilar sources was found. Error bars represent SEM. *p < .05, **p < .01, ***p < .001.
3.2.5. Participants’ shape judgments are more influenced by sources that are
politically like-minded
As in Experiment 1, participants were more influenced by the politically similar sources as well as those that were more accurate on the blap task (N included = 75; Fig. 9a). Participants changed their
answer most after hearing that the similar-accurate source disagreed
with them (M= 58%, SD = 31.07; significantly different from chance: t
(97) = 2.59, p= .011) followed by the dissimilar-accurate source
(M= 52%, SD = 34.39; not different from chance: t(91) = 0.51,
p= .61), followed by the similar-random source (M= 45%,
SD = 27.59; not different from chance: t(94) = −1.73, p= .088) and
finally by the dissimilar-random source (M= 41%, SD = 34.24; sig-
nificantly lower than chance: t(92) = −2.43, p= .017).
Entering the percentage of answers changed out of trials in which the source disagreed with the participants' blap judgment into a two (similar/dissimilar) by two (accurate/random) repeated-measures ANOVA revealed a main effect of source accuracy (F(1, 74) = 11.27, p = .001, η_p² = 0.13), a main effect of political similarity (F(1, 74) = 7.36, p = .008, η_p² = 0.09) and no interaction (F(1, 74) = 0.72, p = .40, η_p² = 0.01). Similar findings are observed when measuring influence as a combination of judgement and confidence (see supplementary results).
Mean reaction times for participants’ final decisions were as follows:
accurate-similar source (M= 1.22 s, SD = 3.61); accurate-dissimilar
source (M= 1.30 s, SD = 8.67); random-similar source (M= 1.38 s,
SD = 5.63), random-dissimilar source (M= 1.07 s, SD = 3.15).
Entering log reaction times for participants’ decision to stick with or
alter their choice into a 2 (similar/dissimilar) by 2 (accurate/random)
ANOVA did not reveal any main effects or interaction (all P > .23).
Fig. 7. An illusory perception of accuracy mediates the relationship between political similarity and information seeking behavior. (a) The true correlation between how accurate a source was on the blap task and how like-minded they were to the participant was zero. Nevertheless, participants' ratings revealed an illusory perception that the two were related (r = 0.36, p < .001). (b) Participants rated accurate sources as more competent on the blap task, but also rated politically like-minded sources as more competent on the blap task. (c) A mediation model revealed that perceived competence partially mediated the relationship between political similarity and choice of which source to hear from about blaps. Error bars represent SEM. *p < .05, **p < .01, ***p < .001.

Fig. 8. Accuracy on blap task partially enhances sense of similarity. (a) Politically similar sources were rated as more similar by participants. Interestingly, the politically like-minded source that was also more accurate on blaps was rated as more similar than the politically like-minded source that was random on blaps. This suggests that accuracy on the blap task partially affected perceived similarity. (b) The reverse mediation to that tested in Fig. 7c – by which perceived political similarity mediates the effect between source accuracy and information seeking behaviour – was not significant. Error bars represent SEM. *p < .05, ***p < .001.

We next conducted a mediation model to test whether the effect of political similarity on influence was mediated by perceived accuracy on the blap task. Results of the multilevel mediation showed that objective political similarity predicted source influence (total effect: M_posterior = 2.77, SD = 1.32, CI = [0.15, 5.35]) and was also positively related to the subjective ratings of competence (path a: M_posterior = 4.39, SD = 0.92, CI = [2.54, 6.22]), which in turn predicted source influence when source similarity was accounted for (path b: M_posterior = 0.85, SD = 0.11, CI = [0.65, 1.06]). The indirect effect of political similarity on source influence through competence rating was significant (path ab: M_posterior = 2.88, SD = 0.95, CI = [1.05, 4.81]) and, once subjective competence rating was controlled for, political similarity no longer predicted source influence (path c′: M_posterior = −0.11, SD = 1.14, CI = [−2.40, 2.04]). These results demonstrate that the effect of political similarity on influence is fully mediated by the perceived competence of the source.
Note that the conceptually reverse mediation model, with objective source accuracy as the predictor, subjective political similarity as the mediator and source influence as the dependent variable, was not significant (no significant effect of objective accuracy on subjective political similarity nor an indirect effect on source influence).

In particular, the model shows that objective accuracy on the blap task predicted source influence (total effect: M_posterior = 5.32, SD = 1.22, CI = [2.90, 7.71]), showing that accurate sources had more influence. The effect was still significant when controlling for subjective similarity (path c′: M_posterior = 5.25, SD = 1.20, CI = [2.86, 7.63]), suggesting that objective accuracy had a unique effect on source influence. Subjective similarity ratings did not predict source influence when objective blap accuracy was accounted for (path b: M_posterior = 0.09, SD = 0.05, CI = [−0.01, 0.20]), objective accuracy was not predictive of subjective similarity ratings (path a: M_posterior = 0.53, SD = 1.49, CI = [−2.42, 3.55]), and the indirect effect of accuracy on the blap task on source influence through subjective similarity rating was not significant (path ab: M_posterior = 0.07, SD = 0.28, CI = [−0.49, 0.65]).
4. Discussion
The current study offers three central findings. The first is that people choose to hear from those who are politically like-minded on topics that have nothing to do with politics (like geometric shapes) in preference to those who have greater expertise on the topic but different political views. The second is that, all else being equal, people are
more influenced by politically like-minded others on nonpolitical issues
such as shape categorization. The third is that people are biased to
believe that others who share their political opinions are better at tasks
that have nothing to do with politics, even when they have all the
information they need to make an accurate assessment about who is the
expert in the room. Our mediation analysis suggests that it is this illu-
sion that underlies participants’ tendency to seek and use information
from politically like-minded others.
A great deal of attention has recently been paid to what sources of
political information people choose (Prior, 2007; Sunstein 2017), how
algorithms affect what they see (Garimella, Morales, Gionis, &
Mathioudakis, 2018; Hannak et al., 2013; Sîrbu, Pedreschi, Giannotti, &
Kertész, 2018; Sunstein, 2017), and how people are affected by en-
countering diverse information on political issues (Colleoni, Rozza, &
Arvidsson, 2014; Druckman et al., 2013; Kahan, 2016; Tappin, van der
Leer & McKay, 2017). There is also growing interest in how political
affiliations affect people’s affective responses to those with different
affiliations (Iyengar, Sood, & Lelkes, 2012; Iyengar & Westwood, 2015).
Our focus here has been on epistemic spillovers – on whether and how a sense of shared political convictions influences people's desire to consult and to use others' views on a task that is entirely unrelated to politics. The most striking finding is that people consult and are influenced by the judgments of those with shared political convictions even when they have observed evidence suggesting that those with different convictions are far more likely to offer the right answer.
While we manipulated similarity on political views, we hypothesize
that similar findings may be observed when similarity is manipulated
along other dimensions that are significant to people (e.g., music or
literature preferences, hobbies etc.), a hypothesis that warrants em-
pirical testing. Moreover, it would be of interest to test whether people
are also more influenced by the like-minded when they receive in-
formation from sources passively and not actively (as in our study).
What accounts for our findings? We have referred to the halo effect:
If people think that products or people are good along some dimension,
they tend to think that they are good along other dimensions as well
(Dion et al., 1972; Nisbett & Wilson, 1977; Thorndike, 1920). If people
have an automatic preference for those who share their political con-
victions, their positive feelings may spill over into evaluation of other,
unrelated characteristics (including their ability to identify blaps). This
is one consequence of political tribalism.
A related explanation involves a heuristic, or mental shortcut, which
often works well, but which can lead to severe and systematic errors
(Kahneman & Frederick, 2002): If people generally believe that politi-
cally like-minded people are particularly worth consulting, they might
extend that belief to contexts in which the belief does not make much
sense. It is possible to take our findings here as evidence that people
regularly use a heuristic of this kind and thus give special weight to the
conclusions of those with similar political convictions, even when they
know, or have reason to know, that those conclusions do not deserve that weight.

Fig. 9. Participants' blap judgments are more influenced by sources that are politically like-minded. (a) Participants were more likely to change their minds about blaps when sources that were (i) more accurate at the blap task and (ii) more politically like-minded disagreed with them than when sources that were random at the blap task or dissimilar disagreed with them. (b) A mediation model revealed that the relationship between political similarity and source influence was mediated by perceived accuracy on the blap task. Error bars represent SEM. +p < .10, *p < .05, **p < .01, ***p < .001.
Our findings have implications for the spread of false news, for
political polarization, and for social divisions more generally. A great
deal of false news is political (Kuklinski, Quirk, Jerit, Schweider, &
Rich, 2000; Kull, Ramsay, & Lewis, 2003) and it is spread by and among
like-minded people (Del Vicario et al., 2016). But our findings suggest
that among the politically like-minded, false news will spread even if it
has little or nothing to do with politics, or even if the connection to
politics is indirect and elusive. Suppose, for example, that someone with
congenial political convictions spreads a rumor about a coming collapse
in the stock market, a new product that supposedly cures cancer or
baldness, cheating in sports, an incipient epidemic, or a celebrity who
has shown some terrible moral failure. Even if the rumor is false, and
even if those who hear it have reason to believe that it is false, they may
well find it credible (and perhaps spread it).
The results help identify both a cause and a consequence of political
polarization. If people trust like-minded others not only on political
questions (Nyhan & Reifler, 2010) but also on questions that have
nothing at all to do with politics, the conditions are ripe for sharp social
divisions, potentially leading people to live in different epistemic universes.
Acknowledgement
This work was funded by a Wellcome Trust Career Development
Fellowship (to T.S.).
Appendix A. Supplementary material
Supplementary data to this article can be found online at https://doi.org/10.1016/j.cognition.2018.10.003.
References
Boorman, E. D., O'Doherty, J. P., Adolphs, R., & Rangel, A. (2013). The behavioral and neural mechanisms underlying the tracking of expertise. Neuron, 80(6), 1558–1571. https://doi.org/10.1016/j.neuron.2013.10.024
Colleoni, E., Rozza, A., & Arvidsson, A. (2014). Echo chamber or public sphere? Predicting political orientation and measuring political homophily in Twitter using big data. Journal of Communication, 64(2), 317–332. https://doi.org/10.1111/jcom.12084
Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554–559. https://doi.org/10.1073/pnas.1517441113
Dion, K., Berscheid, E., & Walster, E. (1972). What is beautiful is good. Journal of Personality and Social Psychology, 24(3), 285. https://doi.org/10.1037/h0033731
Druckman, J. N., Peterson, E., & Slothuus, R. (2013). How elite partisan polarization affects public opinion formation. American Political Science Review, 107(1), 57–79. https://doi.org/10.1017/S0003055412000500
Faraji-Rad, A., Samuelsen, B. M., & Warlop, L. (2015). On the persuasiveness of similar others: The role of mentalizing and the feeling of certainty. Journal of Consumer Research, 42(3), 458–471. https://doi.org/10.1093/jcr/ucv032
Faraji-Rad, A., Warlop, L., & Samuelsen, B. (2012). When the message "feels right": When and how does source similarity enhance message persuasiveness? Advances in Consumer Research, 40, 682–683. http://acrwebsite.org/volumes/1011701/volumes/v40/NA-40
Friggeri, A., Adamic, L. A., Eckles, D., & Cheng, J. (2014). Rumor cascades. Proceedings of the eighth international AAAI conference on weblogs and social media.
Garimella, K., Morales, G. D. F., Gionis, A., & Mathioudakis, M. (2018). Political discourse on social media: Echo chambers, gatekeepers, and the price of bipartisanship. arXiv preprint, arXiv:1801.01665.
Gino, F., Brooks, A. W., & Schweitzer, M. E. (2012). Anxiety, advice, and the ability to discern: Feeling anxious motivates individuals to seek and use advice. Journal of Personality and Social Psychology, 102(3), 497–512. https://doi.org/10.1037/a0026413
Hannak, A., Sapiezynski, P., Molavi Kakhki, A., Krishnamurthy, B., Lazer, D., Mislove, A., & Wilson, C. (2013). Measuring personalization of web search. WWW'13 Proceedings of the 22nd international conference on world wide web (pp. 527–538). https://doi.org/10.1145/2488388.2488435
Hofmann, D. A., Lei, Z., & Grant, A. M. (2009). Seeking help in the shadow of doubt: The sensemaking processes underlying how nurses decide whom to ask for advice. Journal of Applied Psychology, 94(5), 1261–1274. https://doi.org/10.1037/a0016557
Iyengar, S., Sood, G., & Lelkes, Y. (2012). Affect, not ideology: A social identity perspective on polarization. Public Opinion Quarterly, 76(3), 405–431. https://doi.org/10.1093/poq/nfs038
Iyengar, S., & Westwood, S. J. (2015). Fear and loathing across party lines: New evidence on group polarization. American Journal of Political Science, 59(3), 690–707. https://doi.org/10.1111/ajps.12152
Kahan, D. M. (2016). The politically motivated reasoning paradigm, Part 1: What politically motivated reasoning is and how to measure it. In R. A. Scott, & S. M. Kosslyn (Eds.), Emerging trends in the social and behavioral sciences.
Kahne, J., & Bowyer, B. (2017). Educating for democracy in a partisan age: Confronting the challenges of motivated reasoning and misinformation. American Educational Research Journal, 54(1), 3–34. https://doi.org/10.3102/0002831216679817
Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 49–81). New York, NY: Cambridge University Press.
Kuklinski, J. H., Quirk, P. J., Jerit, J., Schweider, D., & Rich, R. F. (2000). Misinformation and the currency of democratic citizenship. The Journal of Politics, 62(3), 790–816. https://doi.org/10.1111/0022-3816.00033
Kull, S., Ramsay, C., & Lewis, E. (2003). Misperceptions, the media, and the Iraq war. Political Science Quarterly, 118(4), 569–598. https://doi.org/10.1002/j.1538-165X.2003.tb00406.x
Nisbett, R. E., & Wilson, T. D. (1977). The halo effect: Evidence for unconscious alteration of judgments. Journal of Personality and Social Psychology, 35(4), 250. https://doi.org/10.1037/0022-3514.35.4.250
Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330. http://www.jstor.org/stable/40587320
Preacher, K. J. (2015). Advances in mediation analysis: A survey and synthesis of new developments. Annual Review of Psychology, 66(1), 825–852. https://doi.org/10.1146/annurev-psych-010814-015258
Prior, M. (2007). Post-broadcast democracy: How media choice increases inequality in political involvement and polarizes elections. New York, NY: Cambridge University Press.
Schilbach, L., Eickhoff, S. B., Schultze, T., Mojzisch, A., & Vogeley, K. (2013). To you I am listening: Perceived competence of advisors influences judgment and decision-making via recruitment of the amygdala. Social Neuroscience, 8(3), 189–202. https://doi.org/10.1080/17470919.2013.775967
Schrah, G. E., Dalal, R. S., & Sniezek, J. A. (2006). No decision-maker is an island: Integrating expert advice with information acquisition. Journal of Behavioral Decision Making, 19(1), 43–60. https://doi.org/10.1002/bdm.514
Sîrbu, A., Pedreschi, D., Giannotti, F., & Kertész, J. (2018). Algorithmic bias amplifies opinion polarization: A bounded confidence model. arXiv preprint, arXiv:1803.02111.
Sunstein, C. R. (2017). #Republic: Divided democracy in the age of social media. Princeton, NJ: Princeton University Press.
Tappin, B. M., van der Leer, L., & McKay, R. T. (2017). The heart trumps the head: Desirability bias in political belief revision. Journal of Experimental Psychology: General, 146(8), 1143–1149. https://doi.org/10.1037/xge0000298
The Pew Research Center (2009, October 30). Partisanship and cable news audiences. Retrieved from http://www.pewresearch.org/2009/10/30/partisanship-and-cable-news-audiences/ (accessed March 29, 2018).
Thorndike, E. L. (1920). A constant error in psychological ratings. Journal of Applied Psychology, 4(1), 25–29.
Vuorre, M., & Bolger, N. (2017). Within-subject mediation analysis for experimental data in cognitive psychology and neuroscience. Behavior Research Methods, 1–19. https://doi.org/10.3758/s13428-017-0980-9
Yaniv, I., & Kleinberger, E. (2000). Advice taking in decision making: Egocentric discounting and reputation formation. Organizational Behavior and Human Decision Processes, 83(2), 260–281. https://doi.org/10.1006/obhd.2000.2909
Yaniv, I., & Milyavsky, M. (2007). Using advice from multiple sources to revise and improve judgments. Organizational Behavior and Human Decision Processes, 103(1), 104–120. https://doi.org/10.1016/j.obhdp.2006.05.006